The Western world is pervaded by digital information, to the point where we might argue that most Western citizens operate in a bio-digital field constituted by the conditions of life and life’s (now intrinsic) relationships to digital code. Historically (if 30 years or so can withstand the definitional intonations of ‘historically’), such notions of code pertained predominantly to government databanks and massive corporate uses of code and data; with the advent of the ‘social web’ and the ease of mashups, however, we are forced to engage with questions of how information, code, and privacy norms and regulations pertain to individuals’ use of data sources. While in some instances we see penalties handed down to individuals who publicly release sensitive information (such as Sweden’s Bodil Lindqvist, who was fined for posting personal data about fellow church parishioners without consent), what is the penalty when public information is taken out of its original format and mashed up with other data sources? What happens when we correlate data to ‘map’ it?
Let’s get into some ‘concrete’ examples to engage with this matter. First, I want to point to geo-locating traceroute data (the information that identifies the origin of website visitors’ data traffic) to start thinking about mashups and privacy infringements. Second, I’ll briefly point to some of the challenges arising with the meta-coding of the world using Augmented Reality (AR) technologies. The overall aim is not to ‘resolve’ any privacy questions, but to reflect on the differences in the ‘specificity’ of geolocation technologies, the implications of that specificity, and the potential need to establish a new set of privacy norms given the bio-digital fields we find ourselves immersed in.
In a conversation with Prof. Andrew Clement this summer we got talking about the ever-increasing deployment of CCTV cameras throughout Canada. The conversation was, at least in part, motivated by the massive number of cameras being deployed throughout Vancouver in the lead-up to the 2010 Olympic games; these cameras were one of the key focal points of the 10th Annual Security and Privacy Conference, where the BC Privacy Commissioner said that he might resign if the surveillance infrastructure is not taken down following the games.
I don’t want to delve into what, in particular, Prof. Clement is thinking of doing surrounding CCTV, given that I don’t think he’s publicly announced his intentions. What I will do, however, is outline my own two-pronged approach to rendering CCTV a little more transparent. At the outset, I’ll note that:
My method will rely on technology (augmented reality) that is presently only in the hands of a small minority of the population;
My method is meant to become more and more useful as the years pass (and as the technology becomes increasingly accessible to consumers).
The broad goal is the following: develop a set of norms and processes to categorize different CCTV installations. Once that task is accomplished, a framework would be developed for an augmented reality program (here’s a great blog on AR) that could ‘label’ where CCTV installations are and ‘grade’ them based on the already established norms and processes.
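To make the labeling-and-grading idea concrete, here is a minimal sketch of what a record for a single CCTV installation, plus a grading function, might look like. The category names, fields, and scoring weights are entirely hypothetical illustrations of the kind of norms the post imagines, not an established scheme:

```python
# Hypothetical sketch: a record for one CCTV installation and a grading
# function an AR overlay could use. Fields and weights are illustrative only.
from dataclasses import dataclass

@dataclass
class CCTVInstallation:
    latitude: float
    longitude: float
    operator: str          # e.g. "municipal" or "private" (assumed categories)
    signage_posted: bool   # is the camera's presence disclosed on-site?
    retention_days: int    # how long footage is stored

def grade(cam: CCTVInstallation) -> str:
    """Assign a letter grade; better disclosure and shorter retention score higher."""
    score = 0
    if cam.signage_posted:
        score += 2          # disclosure weighted most heavily (assumption)
    if cam.retention_days <= 30:
        score += 1
    if cam.operator == "municipal":
        score += 1          # assume public operators face stronger oversight
    return {4: "A", 3: "B", 2: "C", 1: "D"}.get(score, "F")

cam = CCTVInstallation(49.2827, -123.1207, "municipal", True, 14)
print(grade(cam))  # → "A"
```

An AR client could then render each installation’s grade at its latitude/longitude, with the grading criteria published alongside so the norms stay contestable.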
I’ve been reading some work on privacy and social networks recently, and this, combined with Ratliff’s “Gone Forever: What Does It Really Take to Disappear,” has led me to think about whether a geek with a website that is clearly their own (e.g. Christopher-Parsons.com) should reasonably expect restraining orders to extend to digital spaces. I’m not really talking at the level of law, necessarily, but at the level of norms: ought a restraining order limit a person from ‘following’ me online as it does from being near me in the physical world?
Restraining orders are commonly issued to prevent recurrences of abuse (physical or verbal) and stalking. While most people who have a website are unable to track who is visiting their webspace, what happens when you compulsively check your server logs (as many good geeks do) and can roughly correlate traffic to particular geo-locations? As a loose example, let’s say that you were in a small town, ‘gained’ an estranged spouse, and then noticed regular hits to your website from that small town after you’ve been away from it for years. Let’s go further and say that you have few or no friends in that town, and that you do have a restraining order that is meant to prevent your ex-spouse from being anywhere near you. Does surfing to your online presence (we’ll assume, for this posting, that they aren’t commenting or engaging with the site) normatively constitute a breach of the order?
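The kind of rough correlation described above can be sketched in a few lines: pull the client IP from each access-log line and map it to a location. A real setup would consult a GeoIP database; here a tiny hand-made prefix table (with reserved documentation IP ranges and made-up town names) stands in for one, so the mapping is illustrative only:

```python
# Sketch of correlating access-log traffic to rough geo-locations.
# The prefix table below is a stand-in for a real GeoIP database;
# the IP ranges are reserved documentation blocks and the towns are invented.
from collections import Counter
import ipaddress

PREFIX_LOCATIONS = {
    ipaddress.ip_network("198.51.100.0/24"): "Smalltown, BC",
    ipaddress.ip_network("203.0.113.0/24"): "Victoria, BC",
}

def locate(ip: str) -> str:
    """Map an IP to a town via the prefix table, or 'unknown'."""
    addr = ipaddress.ip_address(ip)
    for net, town in PREFIX_LOCATIONS.items():
        if addr in net:
            return town
    return "unknown"

# Apache/nginx "combined"-style log lines begin with the client IP,
# so splitting on whitespace is enough to extract it.
log_lines = [
    '198.51.100.7 - - [01/Oct/2009:10:00:00] "GET / HTTP/1.1" 200 512',
    '198.51.100.7 - - [02/Oct/2009:10:05:00] "GET / HTTP/1.1" 200 512',
    '203.0.113.9 - - [01/Oct/2009:11:00:00] "GET / HTTP/1.1" 200 512',
]
hits = Counter(locate(line.split()[0]) for line in log_lines)
print(hits.most_common())  # the small town shows up most often
```

Even this crude tally is enough to notice a pattern of visits from one town, which is exactly the level of ‘roughly correlating’ the scenario turns on; it says nothing about who is behind the keyboard.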
While it’s fine and good to leave a comment when neither you nor an anonymous blogger knows the other, what happens when you do know the anonymous blogger and it’s clear that they want to remain anonymous? This post tries to engage with this question, focusing on the challenges I experience when I want to post on an ‘anonymous’ blog where I know who is doing the blogging; it attends to the contextual privacy questions that race through my head before I post. As part of this, I want to think through how a set of norms might be established to address my own questions and worries, and how those norms might be communicated to visitors.
I’ve been blogging in various forms for a long time now – about a decade (!) – and in every blog I’ve ever had I use my name. This has been done, in part, because when I write under my name I’m far more accountable than when I write under an alias (or, at least, I think this is the case). This said, I recognize that my stance is slightly different than that of many bloggers out there – many avoid closely associating their published content with their names, and often for exceedingly good reasons. Sometimes a blogger wants to just vent, and doesn’t want to deal with the social challenges that arise once people know that Tommy is angry. Others do so for personal safety reasons (angry or dangerous ex-spouses), some for career reasons (not permitted to blog, or worried about the effects of blogging on future job prospects), and some to avoid ‘-ist’ related comments (sexist, racist, ageist, etc.).
The Canadian SIGINT Summaries include downloadable copies of leaked CSE documents, along with summary, publication, and original source information.
Parsons, Christopher; and Molnar, Adam. (2021). “Horizontal Accountability and Signals Intelligence: Lesson Drawing from Annual Electronic Surveillance Reports,” David Murakami Wood and David Lyon (Eds.), Big Data Surveillance and Security Intelligence: The Canadian Case.
Parsons, Christopher. (2015). “Stuck on the Agenda: Drawing lessons from the stagnation of ‘lawful access’ legislation in Canada,” Michael Geist (ed.), Law, Privacy and Surveillance in Canada in the Post-Snowden Era (Ottawa University Press).
Parsons, Christopher. (2015). “The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians,” Telecom Transparency Project.
Parsons, Christopher. (2015). “Beyond the ATIP: New methods for interrogating state surveillance,” in Jamie Brownlee and Kevin Walby (Eds.), Access to Information and Social Justice (Arbeiter Ring Publishing).
Bennett, Colin; Parsons, Christopher; Molnar, Adam. (2014). “Forgetting and the right to be forgotten” in Serge Gutwirth et al. (Eds.), Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges.
Bennett, Colin; and Parsons, Christopher. (2013). “Privacy and Surveillance: The Multi-Disciplinary Literature on the Capture, Use, and Disclosure of Personal Information in Cyberspace” in W. Dutton (Ed.), Oxford Handbook of Internet Studies.
McPhail, Brenda; Parsons, Christopher; Ferenbok, Joseph; Smith, Karen; and Clement, Andrew. (2013). “Identifying Canadians at the Border: ePassports and the 9/11 legacy,” in Canadian Journal of Law and Society 27(3).
Parsons, Christopher; Savirimuthu, Joseph; Wipond, Rob; McArthur, Kevin. (2012). “ANPR: Code and Rhetorics of Compliance,” in European Journal of Law and Technology 3(3).