As We Now Think

Reflections, commentary and analysis from Consortium for Science, Policy and Outcomes at Arizona State University.

The Right to Be Forgotten?

By Michael Burnam-Fink

The recent revelations of a set of massive and longstanding NSA surveillance programs have prompted a blizzard of accusations, defenses, and recriminations from across the political spectrum. PRISM and related programs have been called everything from “all the infrastructure a tyrant could need” to a vital component of national security, while leaker Edward Snowden has been called everything from a hero to a traitor. A week later, the pieces have yet to settle. But what’s been bothering me in all this is the confusion of ideas about state power, civil liberties, surveillance, computer security, and privacy. It bothers me because without clear ideas we cannot have clear policies, and without clear policies liberty, security, and any other desired public good are achieved only by accident.

PRISM is not, strictly speaking, surveillance. It looks like surveillance, it feels like surveillance, but it lacks the main purpose of surveillance: creating a disciplinary power relationship. When scholars talk about surveillance in a rigorous sense, they’re mostly talking about Foucault’s theory of the Panopticon. The original Panopticon was a plan for a prison, its cells arranged so that a single guard could watch all the prisoners and dispatch punishments and rewards as appropriate to each prisoner’s behavior. Eventually, according to Foucault, the prisoners would internalize the desires of the warden and behave as planned. They would be disciplined.

Foucault’s genius was noting that the architecture of the Panopticon allowed the mechanisms of power to operate at very little cost: because prisoners could not tell when they were under observation, they always had to behave as if they were being watched. Panoptic structures, moreover, were everywhere: in classrooms, in hospitals, in the urban renewal of medieval districts into broad boulevards, even in the bureaucratic organization of the modern state into administrative districts and statistical agencies, to the point that a scholar describing yet another panopticon is met with a sigh and a shrug.

As Whitney Boesel of Cyborgology noted, when we look for a disciplinary purpose in these NSA programs, we find nothing. Despite editorials in the New York Times and on ThinkProgress offering Foucault 101 explanations of the panopticon, and gestures toward chilling effects and potential future harms, it’s difficult to point to any specific thought or speech act that someone has forgone because they might be added to an NSA database. People appear to be totally free to say and think whatever they want online, including espousing flatly anti-democratic opinions from across the political spectrum. These programs are no more panoptic than 20th-century statecraft in general.

Privacy has a totemic value in American political discourse, but privacy as a concept is fuzzy at best. Philosophically, my colleague Jathan Sadowski describes privacy as “that which allows authentic personal growth,” a kind of antithesis to the disciplining and shaping of the panopticon. Legally, American privacy originates in a penumbra of rights drawn from the 4th Amendment (protection from arbitrary search and seizure), the 9th Amendment (other, unspecified rights), and the 14th Amendment (the right to due process). Privacy has further become established as part of the justification for reproductive freedom in Griswold v. Connecticut and Roe v. Wade, loading it with all the baggage of the culture war.

But what is privacy, really? Future Supreme Court Justice Louis Brandeis, in an influential 1890 essay, described it as “the right to be let alone.” Brandeis’s essay was published in the context of an intrusive popular press using the then-new technology of instant photography to violate the privacy of New York society members. Brandeis extended the basic right of a person to limit the expression of their “thoughts, sentiments, and emotions” into a fundamental divide between public and private spaces. Since then, mainstream legal thought has attempted to apply Brandeis’s theory of privacy to new technologies and new concerns, with varying degrees of success.

Brandeis’s metaphor breaks down in the face of Big Data because Brandeis was concerned with the gradations of privacy in space (it is acceptable to be photographed on the red carpet at a premiere, unacceptable on your doorstep, totally illegal inside your home), and computers and data are profoundly non-spatial. There is no “Cyberspace.” That’s an idea cribbed from a science-fiction book written by a man who’d never seen a computer. Spatial metaphors fundamentally fail to capture what computers are doing. Computers are, mathematically speaking, devices that turn numbers into other numbers according to certain rules. These days we use computers for lots of things: science, entertainment, but mostly accounting and communication. And for the latter two uses, the phrase “my personal data” (which inspires so much angst) confuses personal to mean both “about a person” and “belonging to a person.”

Advocates of strict privacy control tend to confuse the two. Privacy is contextual, social, and protean, so I’d like to analyze something more concrete instead: secrets. A secret is something that a person or a small group knows, and does not want other people to know. Most “personal data” is actually part of a transaction, whether you’re buying a stick of gum at the gas station or looking at pictures stored on a remote server. We’re free to keep records of our side of the transaction, yet we’re outraged when the other side keeps records as well. We could ask the other party to delete its records, or not to share them, but at its strongest this is a normative approach. There’s no force behind it.

Moving from the normative ‘ought’ to the ‘is’ requires a technological fix. Physical privacy is important, but walls and screens are far more reliable than the averted gaze. It’s wrong to steal, and valuable things are locked up anyway. The digital equivalent of walls and locks is cryptography: math that makes it difficult to access a file or a computer system. Modern crypto is, technically speaking, very, very good. A 256-bit cipher like AES-256 is unbreakable within the lifespan of the universe, assuming it’s correctly used. The problem with cryptography is that it’s very rarely used according to the directions. People use and reuse weak passwords, they leave themselves logged in on laptops that get left in taxis, or they plug in corrupted USB keys, compromising entire networks.
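The “lifespan of the universe” claim is easy to check with back-of-the-envelope arithmetic. The figures below (an attacker making a quintillion guesses per second) are illustrative assumptions, not a real attacker’s capabilities; the point is the contrast between a full 256-bit keyspace and a weak human-chosen password.

```python
# A sketch of why brute force fails against a correctly used 256-bit key,
# and succeeds instantly against a weak password. Attacker speed is an
# illustrative assumption.
keyspace = 2 ** 256                    # possible 256-bit keys
guesses_per_second = 10 ** 18          # wildly optimistic attacker
universe_age_seconds = 4.3e17          # ~13.8 billion years

seconds_needed = keyspace / guesses_per_second
universe_lifetimes = seconds_needed / universe_age_seconds
print(f"{universe_lifetimes:.1e} universe lifetimes to search the keyspace")

# An 8-character lowercase password, by contrast, falls in under a second:
password_space = 26 ** 8
print(f"{password_space / guesses_per_second:.1e} seconds for the password")
```

The asymmetry is the whole story of the paragraph above: the math is sound, but the keys people actually choose are not.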

There is a very real chance that there is no such thing as digital privacy or security; that Stewart Brand’s slogan “information wants to be free” is true in the same way that “nature abhors a vacuum” is. The basic architectures of computing, x86 and TCP/IP, are decades old and inherently insecure in how they execute code and move data. Cloud services are even worse. We as users don’t own those servers; we don’t even rent them. We borrow them. Google and Facebook aren’t letting us use their services out of the goodness of their hearts, and the data we enter (personal data in both senses) is the source of their market power. Sure, there are more privacy-conscious alternatives (DuckDuckGo, Hushmail, and Diaspora come immediately to mind), but their features are lacking and relatively few people use them. Crypto is hard, and it runs directly against the business model of the major internet companies. The best way to keep a secret is not to tell anybody, and if you have a real secret, I’d strongly advise you never to tell a computer.

Practically, not even the director of the CIA follows that advice. Unless you’re Amish, you have to tell computers things all the time, which leads to the problem of what the government can do, and should not do, with all that data. I personally don’t like the “if you’ve done nothing wrong, you have nothing to fear” argument advanced by advocates of the security state, because the historical record gives plenty of good reasons to distrust American intelligence agencies, ranging from mere incompetence (missing the imminent collapse of the Berlin Wall, Iraq’s non-existent WMDs, the Arab Spring) to outright criminality (CIA-backed assassinations in the ’60s and ’70s, COINTELPRO, Iran-Contra). But this wasn’t some kind of rogue operation: data was collected under the PATRIOT Act, overseen by FISA judges, and Congress was informed. It certainly followed the letter of the law rather than the spirit, but it happened within the democratically elected, bipartisan mechanisms of government just the same. It’s hard to deny that many voters were willing to make that trade of liberty against security.

The dream of counter-terror experts everywhere is a perfect prediction machine: some kind of device that could sift through masses of data and isolate the unique signature of a terrorist plot before it materializes. This is a fantasy. Signals intelligence and social network analysis are immensely useful for mapping a known entity and determining its intentions, but picking ‘lone wolves’ out of a mass of civilians is a different beast entirely. Likewise, data mining can do great work on large and detailed datasets, but since 2001 there have been only a handful of terrorist attacks in America and Europe (local insurgencies have very different objectives and behaviors). There is no signature of an imminent terrorist attack. Realistically, what these systems can do is very rapidly and precisely reconstruct the past, making the history of an event legible in order to determine the extent of an attack and hunt down co-conspirators.
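The fantasy fails for a statistical reason worth making explicit: the base-rate problem. Even a remarkably accurate detector drowns in false positives when the thing it hunts is vanishingly rare. All the numbers below are illustrative assumptions, not estimates of any real program.

```python
# Base-rate sketch: a hypothetical 99%-sensitive detector with a 0.1%
# false-positive rate, applied to a population with ~100 real plotters.
population = 300_000_000
true_plotters = 100
sensitivity = 0.99          # fraction of real plotters it flags
false_positive_rate = 0.001 # fraction of innocents it wrongly flags

true_alarms = true_plotters * sensitivity
false_alarms = (population - true_plotters) * false_positive_rate
precision = true_alarms / (true_alarms + false_alarms)

print(f"{false_alarms:,.0f} innocent people flagged")
print(f"chance a flagged person is a real plotter: {precision:.4%}")
```

Roughly 300,000 innocents get flagged for every 99 real plotters: fewer than one flag in three thousand is a hit, which is why these systems work far better for reconstructing a known event than for prediction.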

What’s happening isn’t really surveillance; the millions of people buying 1984 are reading the wrong part of the book. Orwell’s Party is terrifying not because of the torture chambers in the Ministry of Love, but because it can say “We have always been at war with Eurasia, and the chocolate ration has been increased to 2 oz” (when last month it was 3 oz), and what the Party says is true. Rewriting history is dangerous for nations, but as Daniel Solove has eloquently pointed out, for individuals the proper literary comparison isn’t 1984, it’s Kafka’s The Trial, where the protagonist is bounced powerlessly and senselessly through an immense bureaucracy.

The political problems of these programs and Big Data are not the same as the problems of secret prisons, torture chambers, and non-judicial executions, though all those things are very real and very dangerous to civil liberties. The more common assaults are the unnecessary audit, the line at the airport, the job application rejected because of a bad credit score, and the utter lack of recourse that we as citizens have against these abuses by large-scale organizations, corporations and governments alike.

We could pass laws to force basic changes in how computers work and how data is collected, such as deleting everything as soon as it comes in or packing databases with random chaff. But purely legal solutions to technological problems are almost never effective, and usually add another layer of complexity to the existing mess. As any security expert will tell you, security through obscurity is no security at all, and “anonymous” data is far from anonymous. Giving up and living as suspects under glass, fugitives in our own lives, is equally unappealing.
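The “random chaff” idea is not pure fancy; it has a classic statistical form in randomized response (Warner, 1965), where each record is deliberately corrupted with coin flips so that no individual answer can be trusted, yet the aggregate remains recoverable. The sketch below is a minimal illustration with made-up parameters, not a description of any deployed system.

```python
import random

def noisy_answer(truth: bool) -> bool:
    """Flip a coin: heads, answer honestly; tails, answer at random."""
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

def estimate_true_rate(answers) -> float:
    # Under this scheme P(yes) = 0.5*p + 0.25, so p = 2*(P(yes) - 0.25).
    observed = sum(answers) / len(answers)
    return 2 * (observed - 0.25)

random.seed(0)
true_rate = 0.30  # hypothetical fraction with the sensitive trait
answers = [noisy_answer(random.random() < true_rate) for _ in range(100_000)]
print(round(estimate_true_rate(answers), 2))  # close to 0.30
```

Each stored record is half noise, so it cannot convict anyone on its own, but the population-level statistic survives. Whether that trade is acceptable is exactly the kind of policy question the paragraph above raises.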

The alternative is recognizing that in a world of omnipresent computation, leaving traces behind is inevitable, but that rather than the uncertain shield of privacy, we can wield a sword of truth. To ask for privacy is to ask to be forgotten, something both impossible and generally undesirable. We should have the right to set the record straight, to demand to know what is known about us as individuals and as a population, and to appeal what are currently non-judicial and unaccountable actions. Brandeis’s right to be let alone is not the right to disappear, but the right to demand that those who would harass us reveal themselves and defend their actions.

Michael Burnam-Fink is a PhD student in the Human and Social Dimensions of Science and Technology at Arizona State University. Follow him on Twitter at @mburnamfink.




This entry was posted on June 17, 2013 in Technology Policy.