Immanuel Kant’s essay “On the Common Saying: ‘This May be True in Theory, but it does not Apply in Practice’” argues that theory is central to understanding the world around us and that, moreover, attempts to claim that ‘theory doesn’t apply to the world as such’ are generally misguided. Part of the reason that Kant can so firmly advocate that theory and reality are co-original emerges from his monological rationalism, but at the same time we see him argue that the clearest way to bring theory and practice into alignment is with more theory – rather than adopting ‘parsimonious’ explanations of the world, we would be better off developing rigorous and detailed accounts of it.
Parsimony seems to be a popular term in the social sciences; it lets researchers develop concise theories that can be applied to particular situations, lets them isolate and speak about particular variables, and lends itself to broad(er) public accessibility of the theory in question. At the same time, theorists critique many such parsimonious accounts because they commonly fail to offer full explanations of social phenomena!
The complexity of privacy issues, in combination with a desire for parsimony, has been a confounding issue for privacy theorists. Nailing down what ‘privacy’ actually refers to has been, and continues to be, a nightmarish task insofar as almost every definition has some limiting factor. This problem is (to my mind) compounded in online, or digital, environments, where developing a complete understanding of how data flows across systems, what demands technical languages place on data processing systems, and a comprehensive account of confidentiality and trust are all incredibly challenging and yet essential for theorization. This is especially true when we think of a packet as being like a postcard (potentially one with its content encrypted) – in theory, anyone could be capturing and analyzing packet streams, as well as data held on foreign servers.
Bruce Schneier captures common thinking about privacy in online environments quite nicely in a recent posting, where he wrote:
If your data is online, it is not private. Oh, maybe it seems private. Certainly, only you have access to your e-mail. Well, you and your ISP. And the sender’s ISP. And any backbone provider who happens to route that mail from the sender to you. And, if you read your personal mail from work, your company. And, if they have taps at the correct points, the NSA and any other sufficiently well-funded government intelligence organization — domestic and international (Source).
This is a pragmatic understanding of how data is actually processed, but it does not necessarily correlate with how we expect privacy norms should carry over into digital environments. In essence, when we send an e-mail there is an expectation that it will be as secure as a letter sent through the postal system. Of course, anyone who follows American law knows that as soon as something goes digital, a new set of laws is often applied to the e-version – laws that are often out of sync with ‘analogue’ law and its associated socio-political norms. The stance that we should have different expectations of privacy based on the shift from analogue to digital is not necessarily the stance that we want to assume, and it really leads us to ask questions about what, where, why, and how we generate expectations of privacy. Do we understand privacy on the basis of individualistic expectations? Social expectations? Perhaps it is based on the relationship of the individual to the situation and fellow participating actors?
There are many individuals and groups trying to address questions of how we can secure privacy, and in the process of ‘securing’ privacy they espouse particular understandings of what privacy ‘is’. Ann Cavoukian and Lawrence Lessig appear to share a similar position on privacy – they maintain that by inserting particular values into emerging technologies at the level of code itself, the citizenry and its government can radically shift or preserve dominant social norms. In Cavoukian’s case, she has recently talked about Privacy-Enhancing Technologies Plus (PET+); such technologies have privacy ‘baked in’ so that new and efficient technologies can simultaneously shield individuals’ personal data from inappropriate uses. In Lessig’s case, he argues in Code: Version 2.0 that we need to use legislative power to establish the socio-political normative expectations that technologies ought to conform to. After creating ‘legislative code’, technologists are then responsible for actually implementing these expectations. In both cases there are underlying expectations and stances towards privacy; the most significant (as I read it) is the assertion that individuals have privacy and that steps must be taken to avoid unnecessarily stripping it from them.
One might ask, then: where there is ‘no ability’ to build privacy into the architecture of a particular technological environment, does that mean that we should abandon that space to the ‘public’? Should this position be taken regardless of whether or not publicization is totally out of alignment with social values? To put it bluntly: should we sacrifice online actions (e.g. chat, email, web browsing, file transfers) to the ‘public’ and thus give up our contemporary normative expectations in favor of a technologist dystopia?
In the 90s there was excitement that encryption would be our saviour – privacy through security was possible! Pragmatically, encryption hasn’t been widely deployed beyond commercial and enterprise environments. Theoretically, we might wonder what was meant by securing ‘privacy’ through encryption – given that encrypted traffic is of immediate interest to national security agencies, were we talking about securing communicative privacy at the expense of locational and group anonymity?
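The content-versus-metadata distinction behind this question can be made concrete with a small sketch. Below is a minimal, hypothetical illustration (the `Packet` class, field names, and the toy XOR cipher are all my own assumptions, not anyone’s real protocol or cryptography): even when a message body is encrypted, the ‘envelope’ that every intermediary on the path can observe – sender, recipient, and message size – remains in the clear, which is precisely why encryption secures communicative privacy without securing locational or group anonymity.

```python
# Illustrative sketch only: a make-believe packet whose payload can be
# hidden, but whose addressing metadata cannot.
from dataclasses import dataclass
import os


@dataclass
class Packet:
    src: str        # visible to every hop, like the addresses on a postcard
    dst: str        # likewise visible
    length: int     # payload size is also observable on the wire
    payload: bytes  # only this part can be hidden by encryption


def xor_encrypt(message: bytes, key: bytes) -> bytes:
    """Toy XOR cipher for illustration only -- NOT real cryptography."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(message))


key = os.urandom(16)
body = b"Meet at the usual place at noon."
pkt = Packet(src="alice@example.org", dst="bob@example.org",
             length=len(body), payload=xor_encrypt(body, key))

# What an intermediary (ISP, backbone provider, workplace gateway) still sees:
print(pkt.src, "->", pkt.dst, f"({pkt.length} bytes)")
# The content itself, by contrast, is scrambled until decrypted with the key.
print(xor_encrypt(pkt.payload, key) == body)
```

The point of the sketch is simply that ‘encrypting the mail’ and ‘hiding that mail was sent’ are different problems – the former was the 90s promise, the latter is what traffic analysis exploits.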
The challenge facing us is to conceptualize a privacy model that applies to digital environments and that also holds policy-level applications. To date, we have data protection laws, but depending on your jurisdiction you might wonder just how effective they actually are. Further, given the shift towards cloud computing, actually producing a map of where your data is, and which laws consequently govern whether or not it can be disclosed to authorities, becomes problematic. Once we talk about data, we also fall into discussions of copyright, and from copyright into conversations about free speech and censorship. Privacy’s tendrils extend throughout society, which means that any theory of privacy must at the very least recognize its extensivity, and preferably actually have something to say about where the tendrils extend to.
Conceptually, it may be helpful to turn to legal statutes as well as socio-political norms to find what we term privacy problems, and from there we can think through why these are problems. Having performed these two steps, we can then proceed to develop a rich theoretical understanding of what we mean by ‘privacy’. Definitionally, this term will likely be multifaceted, possessing overlapping elements that sometimes apply only in part to the issue at hand. Because of this multifaceted nature, and because recognizing these facets may require considering problems from varying normative angles, there is a need to investigate processes of norm formation – and, in the process, we watch the ground(s) of privacy extend as we demand a wider and wider accounting of privacy’s constituting elements.
In light of the sheer breadth of work entailed in constructing a theory of privacy, one might be tempted to ‘just’ develop parsimonious accounts of what is private in a particular moment in time and space. This has the advantage of stating that a foundation may be relatively arbitrary, or insufficiently theorized, but still useful for a task or issue at hand. A difficulty with the aforementioned process I put forward (problems –> understanding problems –> defining privacy) is that it leaves open the question of what grounds problem identification – one’s gut instinct, a theoretical apparatus, or something else entirely?
While I lack a response to this last question, my thinking is that we should take a page from Kant’s book and genuinely ask whether a parsimonious understanding of ‘privacy’ – one focused on the pragmatic ‘now’ – is actually what we want, or whether we should instead pursue nuanced and detailed accounts of privacy that are fluid enough to accommodate changes in normative attitudes and technological innovations. Such an approach wouldn’t necessarily discount current pragmatic approaches to, or understandings of, privacy-related problems, but it could innovate well beyond the limited conceptualizations lying behind some of them.