While it’s all well and good to leave a comment when you and an anonymous blogger don’t know one another, what happens when you do know the anonymous blogger and it’s clear that they want to remain anonymous? This post tries to engage with this question, and focuses on the challenges that I experience when I want to post on an ‘anonymous’ blog where I know who is doing the blogging – it attends to the contextual privacy questions that race through my head before I post. As part of this, I want to think through how a set of norms might be established to address my own questions/worries, and how those norms might be communicated to visitors.
I’ve been blogging in various forms for a long time now – about a decade (!) – and in every blog I’ve ever had I use my name. This has been done, in part, because when I write under my name I’m far more accountable than when I write under an alias (or, at least, I think this is the case). This said, I recognize that my stance is slightly different than that of many bloggers out there – many avoid closely associating their published content with their names, and often for exceedingly good reasons. Sometimes a blogger just wants to vent, and doesn’t want to deal with the social challenges that arise once people know that Tommy is angry. Others do so for personal safety reasons (angry/dangerous ex-spouses), some for career reasons (not permitted to blog/worried about the effects of blogging on future job prospects), and some to avoid ‘-ist’ related comments (sexist, racist, ageist, etc.).
All sorts of nasty things are said about ISPs that use Deep Packet Inspection (DPI): ISPs aren’t investing enough in their networks, they just want to punish early adopters of new technologies, they’re looking to deepen their regulatory capacities, or they want to track what their customers do online. ISPs, in turn, tend to insist that P2P applications are causing undue network congestion, and that DPI is the only measure presently available to them to alleviate such congestion.
The constant focus on P2P over the past few years has resulted in various ‘solutions’, including the development of P4P and the shift to UDP. Unfortunately, the cat and mouse game between groups representing record labels, ISPs (to a limited extent), and end-users has led to conflict that has ensured that most of the time and money is being put into ‘offensive’ and ‘defensive’ technologies and tactics online rather than more extensively into bandwidth-limiting technologies. Offensive technologies include those that enable mass analysis of data- and protocol-types to try to stop or delay particular modes of data sharing. While DPI can be factored into this set of technologies, a multitude of network technologies can just as easily fit into this category. ‘Defensive’ technologies include port randomizers, superior encryption and anonymity techniques, and other techniques that are primarily designed to evade particular analyses of network activity.
I should state up front that I don’t want to make myself out to be a technological determinist; neither ‘offensive’ nor ‘defensive’ technologies are in a necessary causal relationship with one another. Many of the ‘offensive’ technologies could have been developed in light of increasingly nuanced viral attacks and spam barrages, to say nothing of the heightening complexity of intrusion attacks and pressures from the copyright lobbies. Similarly, encryption and anonymity technologies would have continued to develop, given that in many nations it is impossible to trust local ISPs or governments.
[Note: this is an early draft of the second section of a paper I’m working on titled ‘Who Gives a Tweet about Privacy’ and builds from an earlier posted section titled ‘Privacy, Dignity, Copyright and Twitter’. Other sections will follow as I draft them.]
Towards a Statutory Notion of Privacy
Whereas Warren and Brandeis explicitly built a tort claim to privacy (and can be read as implicitly laying the groundwork for a right to privacy), theorists such as Alan Westin attempt to justify a claim to privacy that would operate as the bedrock for a right to privacy. Spiros Simitis recognizes this claim, but argues that privacy should be read as both an individual and a social issue. The question that arises is whether these writers’ respective understandings of privacy capture the normative expectations of speaking in a public space, such as Twitter; do their understandings of intrusion/data capture recognize the complexities of speaking in public spaces and provide a reasonable expectation of privacy that reflects people’s interest in keeping private some, but not all, of the discussions they have in public?
The above image was taken by a Google Streetcar. As is evident, all of the faces in the picture have been blurred in accordance with Google’s anonymization policy. I think that the image works nicely as a lightning rod to capture some of the criticisms and questions that have arisen around Streetview:
Does the Streetview image-taking process itself, generally, constitute a privacy violation of some sort?
Is individuals’ privacy secured by just blurring faces?
Is this woman’s privacy being violated/infringed upon in some way as a result of having her photo taken?
Google’s response is, no doubt, that individuals who feel that an image is inappropriate can contact the company and it will take the image offline. The problem is that this puts the onus on individuals, though we might be willing to affirm that Google recognizes photographic privacy as a social value, insofar as any member of society who sees this as a privacy infringement/violation can also ask Google to remove the image. Still, even in the latter case this ‘outsources’ privacy to the community and is a reactive, rather than a proactive, way to limit privacy invasions (if, in fact, the image above constitutes an ‘invasion’). Regardless of whether we want to see privacy as an individual or a social value (or, better, as valuable both for individuals and society), we can perhaps more simply ponder whether blurring the face alone is enough to secure individuals’ privacy. Is anonymization the same as securing privacy?
The Canadian SIGINT Summaries include downloadable copies of leaked CSE documents, along with summary, publication, and original source information.
Parsons, Christopher; and Molnar, Adam. (2021). “Horizontal Accountability and Signals Intelligence: Lesson Drawing from Annual Electronic Surveillance Reports,” David Murakami Wood and David Lyon (Eds.), Big Data Surveillance and Security Intelligence: The Canadian Case.
Parsons, Christopher. (2015). “Stuck on the Agenda: Drawing lessons from the stagnation of ‘lawful access’ legislation in Canada,” Michael Geist (Ed.), Law, Privacy and Surveillance in Canada in the Post-Snowden Era (University of Ottawa Press).
Parsons, Christopher. (2015). “The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians,” Telecom Transparency Project.
Parsons, Christopher. (2015). “Beyond the ATIP: New methods for interrogating state surveillance,” in Jamie Brownlee and Kevin Walby (Eds.), Access to Information and Social Justice (Arbeiter Ring Publishing).
Bennett, Colin; Parsons, Christopher; Molnar, Adam. (2014). “Forgetting and the right to be forgotten” in Serge Gutwirth et al. (Eds.), Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges.
Bennett, Colin; and Parsons, Christopher. (2013). “Privacy and Surveillance: The Multi-Disciplinary Literature on the Capture, Use, and Disclosure of Personal Information in Cyberspace” in W. Dutton (Ed.), Oxford Handbook of Internet Studies.
McPhail, Brenda; Parsons, Christopher; Ferenbok, Joseph; Smith, Karen; and Clement, Andrew. (2013). “Identifying Canadians at the Border: ePassports and the 9/11 legacy,” in Canadian Journal of Law and Society 27(3).
Parsons, Christopher; Savirimuthu, Joseph; Wipond, Rob; McArthur, Kevin. (2012). “ANPR: Code and Rhetorics of Compliance,” in European Journal of Law and Technology 3(3).