Immanuel Kant’s essay “On the Common Saying: ‘This May be True in Theory, but it does not Apply in Practice’” argues that theory is central to understanding the world around us and that, moreover, attempts to claim that ‘theory doesn’t apply to the world as such’ are generally misguided. Part of the reason that Kant can so firmly advocate that theory and reality are co-original emerges from his monological rationalism, but at the same time we see him argue that the clearest way to bring theory and practice into alignment is with more theory – rather than adopting ‘parsimonious’ explanations of the world, we would be better off developing rigorous and detailed accounts of it.
Parsimony seems to be a popular term in the social sciences: it lets researchers develop concise theories that can be applied to particular situations, lets them isolate and speak about particular variables, and lends itself to broad(er) public accessibility of the theory in question. At the same time, theorists critique many such parsimonious accounts because they commonly fail to offer full explanations of social phenomena.
The complexity of privacy issues, combined with a desire for parsimony, has been a confounding issue for privacy theorists. Nailing down what ‘privacy’ actually refers to has been, and continues to be, a nightmarish task insofar as almost every definition has some limiting factor. This problem is (to my mind) compounded in online, or digital, environments, where developing a complete understanding of how data flows across systems, grasping the demands that technical languages place on data processing systems, and developing a comprehensive account of confidentiality and trust are all incredibly challenging and yet essential for theorization. This is especially true when we think of a packet as being like a postcard (potentially one with its content encrypted) – in theory, anyone could be capturing and analyzing packet streams and data held on foreign servers.
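The postcard analogy can be made concrete with a toy sketch: an intermediary on the network path can always read a packet’s routing metadata, and can read the payload too unless the sender encrypted it. The `Packet` class and `xor_cipher` helper below are hypothetical stand-ins for illustration only – XOR is not real encryption, and this models no actual protocol.

```python
# Toy model of the "packet as postcard" analogy: headers are always legible to
# intermediaries; the payload is legible too unless encrypted. XOR stands in
# for real encryption (e.g. TLS) purely for illustration.
from dataclasses import dataclass


def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Reversible toy cipher standing in for real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


@dataclass
class Packet:
    src: str        # routing metadata: always visible, like a postcard's address
    dst: str
    payload: bytes  # readable in transit unless the sender encrypted it


def observe(packet: Packet) -> str:
    """What any intermediary (ISP, exchange point, foreign server) can read."""
    try:
        text = packet.payload.decode("utf-8")
    except UnicodeDecodeError:
        return "<unreadable ciphertext>"
    return text if text.isprintable() else "<unreadable ciphertext>"


key = b"secret"
plain = Packet("alice", "bob", b"meet me at noon")
sealed = Packet("alice", "bob", xor_cipher(b"meet me at noon", key))

print(observe(plain))   # the 'postcard' text is fully exposed in transit
print(observe(sealed))  # only the addresses remain legible to observers
```

The point is that encryption changes what an observer recovers, not whether observation happens: even the `sealed` packet still discloses who is talking to whom.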
This is a full draft of the paper on Twitter and privacy that I’ve been developing over the past few weeks, entitled ‘Who Gives a ‘Tweet’ About Privacy?’ It uses academic privacy literature to examine Twitter and the notion of reasonable expectations of privacy in public, and is written to help nuance privacy discussions surrounding the discourse occurring on Twitter (and, implicitly, similar social networking and blogging sites). The paper focuses on concepts of privacy and, as such, avoids deep empirical analyses of how the term ‘privacy’ is used by particular members of the social networking environment. Further, the paper avoids delving into the web of legal cases that could be drawn on to inform this discussion. Instead, it is theoretically oriented around the following questions:
Do Twitter’s users have reasonable expectations of privacy when tweeting, even though these tweets are the rough equivalent of making statements in public?
If Twitter’s user base should hold expectations of privacy, what might condition these expectations?
The paper ultimately suggests that Daniel Solove’s taxonomy of privacy, most recently articulated in Understanding Privacy, offers the best framework to respond to these questions. Users of Twitter do have reasonable expectations of privacy, but such expectations are conditioned by juridical understandings of what is and is not reasonable. In light of this, I conclude by noting that Solove’s use of law to recognize norms is contestable. Thus, while privacy theorists may adopt his method (a focus on privacy problems to categorize types of privacy infractions), they might profitably condition how and why privacy norms are established – court rulings and dissenting opinions may not be the best foundation upon which to rest our privacy claims – by turning to non-legal understandings of norm development, degeneration, and mutation.
I think about peer to peer (P2P) filesharing on a reasonably regular basis, for a variety of reasons (digital surveillance, copyright analysis and infringement, legal cases, value in efficiently mobilizing data, etc.). Something that always nags at me is the defense that P2P websites offer when they are sued by groups like the Recording Industry Association of America (RIAA). The defense goes something like this:
“We, the torrent website, are just a search engine. We don’t actually host the infringing files; we are just responsible for directing people to them. We’re no more guilty of copyright infringement than Google, Yahoo!, or Microsoft are.”
Let’s set aside the fact that Google has been sued for infringing on copyright on the basis that it scrapes information from other websites, and instead turn our attention to the difference between what are termed ‘public’ and ‘private’ trackers. ‘Public’ trackers are available to anyone with a web connection and a torrent program. These sites do not require users to upload a certain amount of data to access the website – they are public, insofar as there are few or no requirements placed on users to access the torrent search engine and associated index. Registration is rarely required. Good examples are thepiratebay.org and mininova.org. ‘Private’ trackers require users to sign up and log into the website before they can access the search engine and associated index of .torrent files. Moreover, private trackers usually require users to maintain a particular sharing ratio – they must upload an amount of data that equals or exceeds the amount of data that they download. Failure to maintain the correct share ratio results in users being kicked off the site – they can no longer log into it and access the engine and index.
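The share-ratio rule described above reduces to simple arithmetic: a user’s ratio is uploaded bytes divided by downloaded bytes, and access is revoked when it falls below the tracker’s threshold. The sketch below is a minimal illustration under assumed names and an assumed 1.0 threshold; no real tracker’s policy or codebase is being reproduced.

```python
# Minimal sketch of a private tracker's share-ratio gate. The Account fields
# and the 1.0 required_ratio are illustrative assumptions, not any actual
# site's policy.
from dataclasses import dataclass


@dataclass
class Account:
    username: str
    uploaded_bytes: int
    downloaded_bytes: int

    def share_ratio(self) -> float:
        # A brand-new user who has downloaded nothing has no ratio to fail.
        if self.downloaded_bytes == 0:
            return float("inf")
        return self.uploaded_bytes / self.downloaded_bytes


def may_log_in(account: Account, required_ratio: float = 1.0) -> bool:
    """Gate access to the search engine and .torrent index on the ratio."""
    return account.share_ratio() >= required_ratio


seeder = Account("seeder", uploaded_bytes=12_000, downloaded_bytes=10_000)
leech = Account("leech", uploaded_bytes=2_000, downloaded_bytes=10_000)

print(may_log_in(seeder))  # ratio 1.2 meets the threshold: access granted
print(may_log_in(leech))   # ratio 0.2 falls short: locked out of the index
```

In practice trackers track these counters server-side per account, which is precisely what makes ‘private’ trackers less public than their open counterparts: the gate exists before the search engine does.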
The OpenNet Initiative’s (ONI) mission is to “identify and document Internet filtering and surveillance, and to promote and inform wider public dialogs about such practices.” Access Denied: The Practice and Policy of Global Internet Filtering is one of their texts that effectively draws together years of their research, and presents it in an accessible and useful manner for researchers, activists, and individuals who are simply interested in how the Internet is shaped by state governments.
The text is separated into two broad parts – the first is a series of essays that situate the data that has been collected into a quickly accessible framework. The authors of each essay manage to retain a reasonable level of technical acumen, even when presenting their findings and the techniques of filtering to a presumably non-technical audience. It should be noted that the data collected only runs up to 2007 – if you’re reading the text in the hopes that the authors will directly address filtering technologies that have recently been in the news, such as Deep Packet Inspection, you’re going to be disappointed (though they do allude to Deep Packet technologies in a few areas, without explicitly focusing on them). Throughout the text there are references to human rights and, while I’m personally a proponent of them, I wish that the authors had endeavored to lay out more of the complexities of human rights discourse – while they don’t present these rights as unproblematic, I felt that more depth would have been rewarding both for their analysis and for the reader. That said, I can’t begrudge the essays’ authors for drawing on human rights at various points in their respective pieces – doing so fits perfectly within ONI’s mandate, and their arguments surrounding the use of human rights are sound.
The Canadian SIGINT Summaries includes downloadable copies, along with summary, publication, and original source information, of leaked CSE documents.
Parsons, Christopher; and Molnar, Adam. (2021). “Horizontal Accountability and Signals Intelligence: Lesson Drawing from Annual Electronic Surveillance Reports,” David Murakami Wood and David Lyon (Eds.), Big Data Surveillance and Security Intelligence: The Canadian Case.
Parsons, Christopher. (2015). “Stuck on the Agenda: Drawing lessons from the stagnation of ‘lawful access’ legislation in Canada,” Michael Geist (ed.), Law, Privacy and Surveillance in Canada in the Post-Snowden Era (Ottawa University Press).
Parsons, Christopher. (2015). “The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians,” Telecom Transparency Project.
Parsons, Christopher. (2015). “Beyond the ATIP: New methods for interrogating state surveillance,” in Jamie Brownlee and Kevin Walby (Eds.), Access to Information and Social Justice (Arbeiter Ring Publishing).
Bennett, Colin; Parsons, Christopher; Molnar, Adam. (2014). “Forgetting and the right to be forgotten” in Serge Gutwirth et al. (Eds.), Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges.
Bennett, Colin, and Parsons, Christopher. (2013). “Privacy and Surveillance: The Multi-Disciplinary Literature on the Capture, Use, and Disclosure of Personal Information in Cyberspace” in W. Dutton (Ed.), Oxford Handbook of Internet Studies.
McPhail, Brenda; Parsons, Christopher; Ferenbok, Joseph; Smith, Karen; and Clement, Andrew. (2013). “Identifying Canadians at the Border: ePassports and the 9/11 legacy,” in Canadian Journal of Law and Society 27(3).
Parsons, Christopher; Savirimuthu, Joseph; Wipond, Rob; McArthur, Kevin. (2012). “ANPR: Code and Rhetorics of Compliance,” in European Journal of Law and Technology 3(3).