As part of my blood oath to spend the next few years writing on subjects other than detention, I have just released this paper on the inadequacy of privacy as a conceptual framework for regulating personal digital information. The paper is quite exploratory, and I would be very interested in reader reaction. Here is the Introduction:
The question of privacy lies at, or just beneath, the surface of a huge range of contemporary policy disputes. It binds together the American debates over such disparate issues as counter-terrorism and surveillance, online pornography, abortion, and targeted advertising. It captures something deep that a free society necessarily values in our individual relations with the state, with companies, and with one another. And yet we see a strange frustration emerging in our debates over privacy, one in which we fret simultaneously that we have too much of it and too little. This tendency is most pronounced in the counter-terrorism arena, where we routinely demand—with no apparent irony—that authorities do a better job of “connecting the dots” even as we worry about the privacy impact of data-mining and collection programs designed to connect those dots. The New Republic recently declared on its cover that 2010 was “The Year We Were Exposed” and published an article by Jeffrey Rosen subtitled “Why Privacy Always Loses.” By contrast, in a book published earlier in 2010, former Department of Homeland Security policy chief Stewart Baker described privacy concerns as debilitating counter-terrorism efforts across a range of areas:
even after 9/11, privacy campaigners tried to rebuild the wall [between intelligence and law enforcement] and to keep DHS from using [airline] reservation data effectively. They failed; too much blood had been spilled. But in the fields where disaster has not yet struck—computer security and biotechnology—privacy groups have blocked the government from taking even modest steps to head off danger.
These theses cannot both be true. Privacy cannot at once be always losing—a value so at risk that it requires, as Rosen contends, “a genuinely independent [government] institution” dedicated to its protection—and simultaneously be impeding the government from taking even “modest steps” to prevent catastrophes.
Unless, that is, our concept of privacy is so muddled, so situational, and so in flux, that we are not quite sure any more what it is or how much of it we really want.
In this paper, I explore the possibility that technology’s advance and the proliferation of personal data in the hands of third parties have left us with a conceptually outmoded debate, whose reliance on the concept of privacy does not usefully guide the public policy questions we face. And I propose a different vocabulary for that debate—a concept I call “databuse.” When I say here that privacy has become obsolete, to be clear, I do not mean this in the crude sense that we have as a society abandoned privacy in the way that, say, we have abandoned once-held moral anxieties about lending money for interest. Nor do I mean that we have moved beyond privacy in the sense that we moved beyond the need for a constitutional protection against the peacetime quartering of soldiers in private houses without the owner’s consent. Privacy still represents a deep value in our society and in any society committed to liberalism.
Rather, I mean to propose something more precise, and more subtle: that the concept of privacy as we have traditionally understood it in law no longer describes well or completely the actual value at stake in the set of issues we continue to argue in privacy’s name. The notion of privacy was always vague and hard to pin down as an operational matter in law. But this problem has grown dramatically worse as a result of the proliferation of data about all of us and the ability to analyze and cross-reference that data systematically and instantly. To put the matter bluntly, the concept of privacy will no longer bear the weight we are placing upon it. And because the term covers such a huge range of ground, its imprecision with respect to these new problems creates great indeterminacy as to what the value we are trying to protect really is, whether it is gaining or losing ground, and whether that is a good thing or a bad one.
In this paper, I examine privacy’s conceptual obsolescence with respect only to a single area, albeit one that is by itself hopelessly sprawling: data about individuals held in the hands of third parties. Our lives, as I have elsewhere argued, are described by a mosaic of such data—an ever-widening array of digital fingerprints reflecting nearly all of life’s many aspects. Our mosaics record our transactions, our media consumption, our locations and travel, our communications, and our relationships. They are, quite simply, a detailed portrait of our lives—vastly more revealing than the contents of our underwear drawers, yet protected by a weird patchwork of laws that reflects no coherent value system. We tend to discuss policy issues concerning control over our mosaics in the language of privacy for the simple reason that privacy represents the closest value liberalism has yet articulated to the one we instinctively wish in this context both to protect and to balance against other goods—goods such as commerce, security, and the free exchange of information. And there is no doubt an intuitive logic to the use of the term in this context. If one imagines, for example, the malicious deployment of all of the government’s authorities to collect the components of a person’s mosaic and then the use of those components against that person, one is imagining a police state no less than if one imagines an unrestricted power to raid people’s homes. If one imagines unrestricted commerce in personal information about people’s habits, tastes, and behaviors—innocent and deviant alike—one is imagining an invasion of personal space as destructive of a person’s privacy as breaking into that person’s home and selling all the personal information one can pilfer there.
Yet the construction of these issues as principally implicating privacy is not inevitable; indeed, privacy itself is not inevitable as a legal matter. It was, as I shall argue, created in response to the obsolescence of previous legal constructions designed to shield individuals from government and one another, and it was created because technological developments made those earlier constructions inadequate to describe the violations people were feeling. Ironically, today it is privacy itself that no longer adequately describes the violations people are feeling with respect to the mosaic—and it describes those violations less and less well as time goes on. Much of the material that makes up the mosaic, after all, involves records of events that take place in public, not in private; driving through a toll booth or shopping at a store, for example, are not exactly private acts. Most mosaic data is sensitive only in aggregation; it is often trivial in and of itself—and we consequently think little of giving it, or the rights to use it, away. Indeed, mosaic data by its nature is material we have disclosed to others, often in exchange for some benefit, and often with the understanding, implicit or explicit, that it would be aggregated and mined for what it might say about us. It takes a feat of intellectual jujitsu to construct a cognizable and actionable set of privacy interests out of an amalgamation of public activities and transactions one knowingly conducted with strangers in exchange for benefits. The term privacy has become a crutch—a description of many different values of quite-different weights—that does not usefully describe the harms we fear.
The more sophisticated privacy scholars and advocates appreciate this. In his exhaustive effort to create a “Taxonomy of Privacy,” Daniel Solove argues up front that “The concept of ‘privacy’ is far too vague to guide adjudication and lawmaking” and that “it is too complicated a concept to be boiled down to a single essence.” Rather, he treats privacy as “an umbrella term, referring to a wide and disparate group of related things.” Just how wide becomes clear over the course of his 84-page article. His taxonomy contains four principal parts, each consisting of multiple subparts—creating, all in all, a 16-part typology that ranges from blackmail to data “aggregation” and “decisional interference.” And he concedes in the end that although all of the privacy harms he identifies “are related in some way, they are not related in the same way—there is no common denominator that links them all.” Solove’s heroic effort to salvage privacy’s coherence through comprehensive cataloguing has the unintended effect of revealing its unsalvageability.
My purpose here is to propose a different vocabulary for discussing the mosaic—in some ways a simpler, cruder one, but one that more accurately describes our behavior with respect to the mosaic than privacy does and that offers more useful guidance as to what activities we should and should not tolerate. The relevant concept is not, in my judgment, protecting some elusive positive right of user privacy but, rather, protecting a negative right—a right against the unjustified deployment of user data in a fashion adverse to the user’s interests, a right, we might say, against databuse. The databuse conception of the user’s equity in the mosaic is more modest than privacy. It doesn’t ask to be “let alone.” It asks, rather, for a certain protection against tangible harms as a result of a user’s having entrusted elements of his or her mosaic to a third party. Sometimes, to be sure, these tangible harms will implicate privacy as traditionally understood, but sometimes, as I will explain, they will not. Think of it as a right not to have your data rise up and attack you.
Thinking in terms of databuse about the mosaic questions we currently debate in the language of privacy has a clarifying effect on a number of contemporary public policy disputes. In some cases, it tends to suggest policy outcomes roughly congruent with those suggested by a more conventional privacy analysis. In other cases, by contrast, it suggests both more and less aggressive policy interventions and market developments on behalf of users. In some areas, it argues for a complacent attitude towards data uses and acquisitions that have traditionally drawn the skeptical eye of privacy activists. Yet it also suggests more intense focus on a subset of privacy issues that are currently under-emphasized in privacy debates—specifically, issues that genuinely implicate personal security.