The Privacy and Civil Liberties Oversight Board is an advisory body that assists the President and other senior Executive branch officials in ensuring that concerns with respect to privacy and civil liberties are appropriately considered in the implementation of all laws, regulations, and executive branch policies related to the war against terrorism. On November 12th the Board is holding an open meeting to discuss "Defining Privacy." According to the Board: "While the Board will address the definition of privacy in the context of government counterterrorism programs, it is also interested in what conceptual interests are involved in the protection of privacy, how the impact of technology has affected privacy, what privacy interests have been identified by government privacy officials, what lessons have been learned in the private sector, and what the best way is for government to address privacy concerns."
I have been invited to speak on the first panel. My remarks are entitled "Privacy as a Utilitarian Value" and they reflect my views on how we should best conceive of privacy (and therefore how best to protect it). The text of my remarks is as follows:
Mr. Chairman, Members of the board, thank you very much for the opportunity to speak with you today about the future of privacy. I appreciate the opportunity to share my thoughts with you.
It’s entirely appropriate for the Board to begin a discussion of privacy in the new technological age. Indeed, it is essential in my judgment for this Board and others within the government to think about how privacy should be reconfigured in light of the twin challenges of new threats and new technology. The fair information practice principles, or FIPs, were, when first conceived, a commendable and useful guide to protecting privacy. But they were created in 1973. Today they are antiques. The 1973 Thunderbird was a marvelous car, but we would not think of holding it out today as a state-of-the-art example of automotive engineering. Nor should we think of the FIPs as the state of the art of privacy thinking. What we need, in effect, is a new Tesla for privacy.
So what should this new privacy Tesla look like? There are many ways to answer that question, but from my perspective the right way to start is to think hard about what privacy is and why we value it. In my judgment privacy is not an ontological value. It is not an inherent human right or the product of some natural law.
Rather, I see privacy as an instrumental value, one that acts in service of other societal values. In other words, privacy is in my view a utilitarian value that derives its worth only insofar as it fosters other positive social gains. Privacy for its own sake is only an assertion of fictitious autonomy and a vain attempt to withdraw from society. It has value principally insofar as it advances other objectives.
I realize that in saying this I put myself in stark disagreement with others on this panel and likely with members of the Board. But nonetheless I think it is the right way to approach the question of what privacy is and why we value it. And that, in turn, lets us think more clearly about how to protect it. Let me elaborate:
The problem really is that buried in the word “privacy” are many different social values that we are fostering – too many really to catalog. For example we often see privacy as enhancing our freedom from government observation. That’s probably the use of privacy that’s most salient to this board.
But privacy fosters any number of other values. It’s an enabler of democracy -- that’s why we keep the ballot private. It can foster personal morality -- that’s why we keep the confessional private. Privacy is also about restraining government misbehavior, which is why we see privacy values in the Fourth Amendment and other procedural limitations on government action – another way in which privacy is relevant to this Board’s conduct.
Privacy is also sometimes about transparency, in the sense that we have privacy rules so that I know what you know about me. Privacy can be about control -- control of my own image; control of information about me; control of how one is perceived by others. Privacy is sometimes about shame, since one ground of privacy is that it enables me to hide from view certain peccadilloes or conduct of which I’m not proud. In yet another sense privacy is a cornerstone of some of our most liberal values of limited government, because it serves as a theoretical underpinning for the rule that we are not, generally, obliged to justify our conduct to the government.
What’s important to note is that in all of these instances the value we’re protecting is different from privacy itself. And that, in turn, suggests that often there might be other ways to protect the underlying value than through the imposition of privacy rules. We might limit the government’s opportunity to oblige you to justify your conduct directly. And there are certainly other ways to protect against governmental abuse through mechanisms like audit and oversight.
What all this means for me is that at the micro level we need to understand the nuance that arises from the multivariate nature of privacy. If we understand its variegated nature, then we also might think that we need a variable set of structures to foster the values we are seeking to advance. In some cases that may be very strict controls: formal judicial review and law. In others it might be thought that administrative controls are adequate.
In the end, what it really comes down to is a detailed cost-benefit analysis, one derived from an appreciation both of the underlying values privacy serves in a particular context and of the costs of that protection, weighed against the harms to be avoided.
After all, judicial scrutiny, for example, comes at some significant cost. Those are not just the costs that arise from the marginal decrease in the activity being scrutinized. Rather, things like predicate requirements and judicial scrutiny mean litigation. It means that reasonable judgments will be subject to post hoc review, and this will in turn require detailed record-keeping and more rigid and formalized processes. These are significant costs that in the end need to be weighed and balanced against the countervailing privacy gains and against all the underlying values and concerns that are animating our discussion. To put it prosaically: if every exercise of a security function, say secondary screening at an airport, were to be accompanied by new paperwork and the prospect of judicial review, then the natural and inevitable consequence would be a reduction in the amount of screening – quite possibly below that which we might think was the optimal amount.
So all that is a useful theoretical framework. What does it mean in practice? What does it mean for our Tesla?
What it means, I think, is that each privacy intrusion is different. The degree of the intrusion matters; the consequences of the intrusion to the data subject matter; the harm that we’re seeking to avoid from the intrusion matters; and the process that we put in place to adjudicate the intrusion has to be matched to all of these.
Let me give you a few examples of how I think this plays out. Consider, for example, the value of candor in conversation – it’s a value that we advance by extending a form of privacy to the attorney-client discussion. But we don’t protect that value with a process; rather, we protect it with a privilege against disclosure. Now, you can imagine other ways of protecting the attorney-client relationship, for example by requiring judicial warrants issued on probable cause before the privilege is breached by the government. Instead, given its salience, and our perception of how important the underlying candor is to advancing the process of justice, we make the privilege nearly absolute. We’ve chosen not an administrative mechanism nor a procedural authorization mechanism but rather a rule of evidence, and it seems to work perfectly fine.
Likewise, in the context of airport screening, we have decided that a series of administrative processes are more than adequate to protect against shame and to restrain government observation. I would venture the hypothesis that this is because we perceive the privacy intrusion as relatively minimal and the value on the other side -- of airport safety -- as particularly high. Leaving aside questions of whether those assessments are accurate, it seems that the valuation reflects a generally shared social consensus.
How then are we to think about things that are more in the remit of this Board? Things like the metadata program or the 702 program operated by NSA? The thrust of the analysis I’ve suggested is that they are very different programs that give rise to different considerations. They differ, for example, in the degree of collection and in the potential for misuse. Perhaps most importantly, I think the underlying value we are trying to protect in each is different as well – the 215 program is much more about the avoidance of governmental scrutiny and abuse, while the privacy interest in the 702 program seems to me more related to the value of international norms and questions about the morality of espionage vel non. I would expect our privacy answer to be different for each program.
In particular, since the 215 program more directly impacts issues of government abuse, we would likely prefer that program to be scrutinized more deeply by an independent review mechanism than we might find necessary for the 702 program. Equally, we would want a more robust error correction mechanism for the 215 program since the inevitable mistakes would directly affect American citizens.
If I may, let me also say a brief word about the need for transparency. I completely agree with others on this panel that transparency is essential to control of government conduct and misconduct. But the critical question is “what type of transparency?” For me, again, this requires us to ask what transparency is for – and for me the ground of transparency is oversight and audit. Transparency without that ground is mere voyeurism.
But absolute transparency cannot be squared with the need for secrecy in operational programs. I sometimes think that some calls for transparency (though not, of course, by members of this panel) are really just coded efforts to discontinue surveillance programs altogether. The simple truth is that if we believe in absolute transparency then we have gone a long way to the view that democracies can’t have secrets – a view I reject as untenable in the modern world.
What is necessary, in the end, is delegated transparency – a willingness to rely on our elected representatives, the executive and the judiciary to use internal and external oversight mechanisms on our behalf. That answer will not satisfy some who have no trust for the government at all – but it is the traditional mechanism by which we have managed confidences since the Founding.
Let me offer one further thought – about the role of this Board. With respect, the cost-benefit analysis that I see at the heart of the modern privacy question typically involves the weighing of values that are generally incommensurate. My personal perception of the intrusiveness of electronic screening is different, I think, from yours and from others’ on the panel. My concerns over security are likewise different from anybody else’s. That’s human nature. As economists like to say, individual utility functions vary.
So I think one additional lesson that I would take from the multivariate and utilitarian nature of privacy is that the utilitarian calculus should be done by the most representative body possible. And, with respect, that’s not this Board; it’s typically Congress (and maybe the executive branch, since that is the only branch headed by somebody who has been elected by all Americans). But it isn’t a body of experts who think that they know what privacy really is and what people really want from privacy. In the end, because I think privacy is an instrumental value, I also think that the decision about how instrumental and how utilitarian it is should be made by some broadly representative institution.
Let me close by saying that I recognize that my views are unlikely to persuade some on the Board and many in the public. The reason, of course, is that if you think that privacy is an ontological value -- a fundamental natural right, if you will -- then nothing that I can say is likely to convince you, and nothing that I can offer as an alternative in terms of process is likely to satisfy you either. But it’s important to recognize the ontological nature of that disagreement. It is in essence a matter of faith, not a matter of practical dispute. And that, I think, would be unfortunate. Because the threats we face have changed, we should not, I think, be so rigid as to advance privacy above all other values. To be sure, privacy is an important instrumental tool for advancing critical social structures. But our new Tesla of privacy protection needs to be based on an assessment of what those structures are, not on a view of privacy as some platonic ideal.