Immanuel Kant’s essay “On the Common Saying: ‘This May Be True in Theory, But It Does Not Apply in Practice’” argues that theory is central to understanding the world around us and, moreover, that attempts to say ‘theory doesn’t apply to the world as such’ are generally misguided. Part of the reason Kant can so firmly advocate that theory and reality are co-original emerges from his monological rationalism, but at the same time we see him argue that the clearest way to bring theory and practice into alignment is with more theory – rather than adopting ‘parsimonious’ explanations of the world, we would be better off developing rigorous and detailed accounts of it.
Parsimony seems to be a popular term in the social sciences; it lets researchers develop concise theories that can be applied to particular situations, lets them isolate and speak about particular variables, and lends the theory in question broad(er) public accessibility. At the same time, theorists critique many such parsimonious accounts because they commonly fail to offer full explanations of social phenomena!
The complexity of privacy issues, combined with a desire for parsimony, has been a confounding issue for privacy theorists. Nailing down what ‘privacy’ actually refers to has been, and continues to be, a nightmarish task insofar as almost every definition has some limiting factor. This problem is (to my mind) compounded in online, or digital, environments, where developing a complete understanding of how data flows across systems, of the demands that technical languages place on data processing systems, and of confidentiality and trust is incredibly challenging and yet essential for theorization. This is especially true when we think of a packet as being like a postcard (potentially one with its content encrypted) – in theory, anyone could be capturing and analyzing packet streams and data that is held on foreign servers.
Colin Bennett, in his recent text The Privacy Advocates: Resisting the Spread of Surveillance, does a nice job of developing a typology for privacy advocates. As a minor point of controversy, his text doesn’t include data protection commissioners as ‘privacy advocates’, even if they self-identify as such, on the basis that he wants to reflect on the roles that actors from civil society now play. Privacy, when understood in terms of regulatory capacity and relevant actors, cannot sensibly be talked about just in terms of ‘official’ advocates (e.g. data commissioners), because civil society is often deeply involved in the actions, reactions, and positions that the commissioners are forced to assume. In essence, privacy advocates are sometimes friends of, foes of, or ambivalent towards the privacy commissioners (I’d use another typology for this relationship, but I’ll wait for it to be publicly presented before talking about it here. It’s really snazzy though.).
Privacy advocates, in Bennett’s terms, are classified as such:
This is a full draft of the paper on Twitter and privacy that I’ve been developing over the past few weeks, entitled ‘Who Gives a ‘Tweet’ About Privacy?’ It uses academic privacy literature to examine Twitter and the notion of reasonable expectations of privacy in public, and is written to help nuance privacy discussions surrounding the discourse occurring on Twitter (and, implicitly, similar social networking and blogging sites). The paper focuses on concepts of privacy and, as such, avoids deep empirical analyses of how the term ‘privacy’ is used by particular members of the social networking environment. Further, the paper avoids delving into the web of legal cases that could be drawn on to inform this discussion. Instead, it is theoretically oriented around the following questions:
- Do Twitter’s users have reasonable expectations of privacy when tweeting, even though these tweets are the rough equivalent of making statements in public?
- If Twitter’s user base should hold expectations of privacy, what might condition these expectations?
The paper ultimately suggests that Daniel Solove’s taxonomy of privacy, most recently articulated in Understanding Privacy, offers the best framework for responding to these questions. Users of Twitter do have reasonable expectations of privacy, but such expectations are conditioned by juridical understandings of what is and is not reasonable. In light of this, I conclude by noting that Solove’s use of law to recognize norms is contestable. Thus, while privacy theorists may adopt his method (a focus on privacy problems to categorize types of privacy infractions), they might profitably condition how and why privacy norms are established – court rulings and dissenting opinions may not be the best foundation upon which to rest our privacy claims – by turning to non-legal understandings of norm development, degeneration, and mutation.
Paper can be downloaded here.
I think about peer to peer (P2P) filesharing on a reasonably regular basis, for a variety of reasons (digital surveillance, copyright analysis and infringement, legal cases, value in efficiently mobilizing data, etc.). Something that always nags at me is the defense that P2P websites offer when they are sued by groups like the Recording Industry Association of America (RIAA). The defense goes something like this:
“We, the torrent website, are just a search engine. We don’t actually host the infringing files; we are just responsible for directing people to them. We’re no more guilty of copyright infringement than Google, Yahoo!, or Microsoft are.”
Let’s set aside the fact that Google has been sued for copyright infringement on the basis that it scrapes information from other websites, and instead turn our attention to the difference between what are termed ‘public’ and ‘private’ trackers. ‘Public’ trackers are available to anyone with a web connection and a torrent program. These sites do not require users to upload a certain amount of data to access the website – they are public insofar as there are few or no requirements placed on users to access the torrent search engine and associated index. Registration is rarely required. Good examples are thepiratebay.org and mininova.org. ‘Private’ trackers require users to sign up and log into the website before they can access the search engine and associated index of .torrent files. Moreover, private trackers usually require users to maintain a particular share ratio – they must upload an amount of data that equals or exceeds the amount of data they download. Failure to maintain the correct share ratio results in users being kicked off the site – they can no longer log in and access the engine and index.
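The share-ratio rule above is easy to see in miniature. Here is a minimal sketch (the function names, the 1.0 minimum ratio, and the byte figures are all hypothetical, not drawn from any actual tracker’s code) of how a private tracker might compute a user’s ratio and decide whether the account stays in good standing:

```python
# Hypothetical sketch of a private tracker's share-ratio check.

def share_ratio(uploaded_bytes: int, downloaded_bytes: int) -> float:
    """Ratio of data uploaded to data downloaded.

    A user who has downloaded nothing cannot be penalized, so treat
    their ratio as effectively unbounded.
    """
    if downloaded_bytes == 0:
        return float("inf")
    return uploaded_bytes / downloaded_bytes


def in_good_standing(uploaded_bytes: int, downloaded_bytes: int,
                     minimum_ratio: float = 1.0) -> bool:
    """The account survives only if upload keeps pace with download."""
    return share_ratio(uploaded_bytes, downloaded_bytes) >= minimum_ratio


gb = 1024 ** 3  # bytes in a gibibyte

# Someone who downloaded 10 GB but uploaded only 5 GB has a 0.5 ratio
# and, on a tracker demanding a 1.0 minimum, would be kicked off.
print(in_good_standing(5 * gb, 10 * gb))   # False
print(in_good_standing(12 * gb, 10 * gb))  # True
```

The point of the sketch is simply that a ‘private’ tracker enforces reciprocity as a condition of access, whereas a ‘public’ tracker performs no such accounting at all.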