Databuse and a Trusteeship Model of Consumer Protection in the Big Data Era

Coauthored with Wells C. Bennett

How much does the relationship between individuals and the companies to which they entrust their data depend on the concept of “privacy”? And how much does the idea of privacy really tell us about what the government does, or ought to do, in seeking to shield consumers from Big Data harms?

There is reason to ask. Privacy is undeniably a deep value in our liberal society. But one can acknowledge its significance and durability while also conceding its malleability. For privacy is also something of an intellectual rabbit hole, a notion so contested and ill-defined that it often offers little guidance to policymakers about which uses of personal information they should encourage, discourage, or forbid. Debates over privacy often descend into angels-on-the-head-of-a-pin discussions. Groups organize around privacy. Companies speak reverently of privacy and maintain elaborate policies to deliver it—or to justify their manipulations of consumer data as consistent with it. Government officials commit to protecting privacy, even in the course of conducting massive surveillance programs. And we have come to expect as much, given the disagreement in many quarters over what privacy means.

The invocation of privacy mostly serves to shift discussion from announcing a value to addressing what that value requires. Privacy can tell a government or company what to name a certain policy after. But it doesn’t answer many questions about how data ought to be handled. Moreover, in its broadest conception, privacy also has a way of overpromising—of creating consumer expectations on which our market and political system will not, in fact, deliver. The term covers such a huge range of ground that it can, at times, suggest protections in excess of what regulators are empowered to enforce by law, what legislators are proposing, and what companies are willing to provide consistent with their business models.

In 2011, one of us suggested that “technology’s advance and the proliferation of personal data in the hands of third parties has left us with a conceptually outmoded debate, whose reliance on the concept of privacy does not usefully guide the public policy questions we face.” Instead, the paper proposed thinking about masses of individual data held by third-party companies with reference to a concept it termed “databuse,” which it defined as: the malicious, reckless, negligent, or unjustified handling, collection, or use of a person’s data in a fashion adverse to that person’s interests and in the absence of that person’s knowing consent. Databuse can occur in corporate, government, or individual handling of data. Our expectations against it are an assertion of a negative right, not a positive one. It is in some respects closer to the non-self-incrimination value of the Fifth Amendment than to the privacy value of the Fourth Amendment. It asks not that we be left alone, only that we not be forced to be the agents of our own injury when we entrust our data to others. We are asking not necessarily that our data remain private; we are asking, rather, that they not be used as a sword against us without good reason.

In the pages that follow, we attempt to apply this idea to a broad public policy problem, one with which government, industry, consumers, and the privacy advocacy world have long grappled: defining the data protection obligations of for-profit companies that receive individual data in the course of providing services to consumers without financial charge. In other words, we attempt to sketch out the duties that businesses like Google and Facebook owe to their users—though without drawing on any broad-brush concept of privacy. Rather, we attempt to identify, amid the enormous range of values and proposed protections that people often stuff into privacy’s capacious shell, a core of user protections that actually represents something like a consensus.

This core interestingly lacks a name in the English language. But the values and duties that make it up describe a relationship best seen as a form of trusteeship. A user’s entrusting his or her personal data to a company in exchange for a service, we shall argue, imposes certain obligations on the corporate custodians of that person’s data: obligations to keep the data secure, obligations to be candid and straightforward with users about how their data is being exploited, obligations not to materially misrepresent uses of user data, and obligations not to use the data in fashions injurious or materially adverse to users’ interests without their explicit consent. These obligations show up in nearly all privacy codes, in patterns of government enforcement, and in the privacy policies of the largest internet companies. It is failures of this sort of data trusteeship that we define as databuse. And we argue that protection against databuse—and not broader protection of more expansive, aspirational visions of privacy—should lie at the core of the relationship between individuals and the companies to which they give data in exchange for services.

The first-party, data-in-trade-for-services relationship is not the only one that matters in the Big Data world. We specifically put aside here the question of how to understand the obligations of so-called “data brokers”—companies that have no direct relationship with the people whose data they collect and sell. The many vexing policy questions that data brokers raise are different in character from those implicating the first-party relationship, and we leave them for another day.

As we explain in the paper’s third section, this approach promises a narrower conception of privacy—narrower, at any rate, than the one the Federal Trade Commission has sometimes used and European regulators have used frequently. Yet it better explains much of the Commission’s enforcement activity and the White House’s legislative proposals than the broader concept of privacy does. There’s a reason for this, we suspect, and it’s not just that the relevant law does not countenance enforcement in situations in which—however anxious privacy advocates may be on consumers’ behalf—consumers face no tangible harm. It’s also that while a broad societal consensus exists about the need to safeguard consumers against deceptive corporate behavior and corporate behavior that causes consumer harm, no similar consensus exists that consumers require protection from voluntary exchanges of personal data in return for services, exchanges that trade privacy for other goods. Indeed, judging by consumer behavior, a consensus has all but developed in the opposite direction: we routinely regard such trades as promising tangible benefits in exchange for reasonable and manageable risks in a data-dependent society. That view informs the paper’s fourth and final section, in which we offer some general comments regarding the future of consumer protection law.
