Privacy Norms in the Bio-Digital World

The Western world is pervaded by digital information, to the point where we might argue that most Western citizens operate in a bio-digital field that is constituted by the conditions of life and life's (now intrinsic) relationships to digital code. While historically (if 30 years or so can withstand the definitional intonations of 'historically') such notions of code would predominantly pertain to government databanks and massive corporate uses of code and data, with the advent of the 'social web' and the ease of mashups we are forced to engage with questions of how information, code, and privacy norms and regulations pertain to individuals' usage of data sources. While in some instances we see penalties being handed down to individuals who publicly release sensitive information (such as Sweden's Bodil Lindqvist, who was fined for posting personal data about fellow church parishioners without consent), what is the penalty when public information is taken out of its original format and mashed up with other data sources? What happens when we correlate data to 'map' it?

Let's get into some 'concrete' examples to engage with this matter. First, I want to point to geo-locating traceroute data, the information that identifies the origin of website visitors' data traffic, to start thinking about mashups and privacy infringements. Second, I'll briefly point to some of the challenges arising with the meta-coding of the world using Augmented Reality (AR) technologies. The overall aim is not to 'resolve' any privacy questions, but to reflect on the differences in the 'specificity' of geolocation technologies, the implications of that specificity, and the potential need to establish a new set of privacy norms given the bio-digital fields in which we find ourselves immersed.
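To make the first example concrete, consider how little code it takes to turn a visitor's IP address into an approximate point on a map. The following is a minimal, illustrative Python sketch, assuming the third-party geoip2 package and a locally downloaded MaxMind GeoLite2-City database; the locate function name and file path are my own inventions for this example.

```python
# Hypothetical sketch: geolocate a visitor's IP address, the kind of
# 'mashup' raw material discussed above. Assumes the third-party
# geoip2 package and a local MaxMind GeoLite2-City database file.
import geoip2.database

def locate(ip_address: str, db_path: str = "GeoLite2-City.mmdb"):
    """Return a rough (city, country, lat, lon) estimate for an IP."""
    with geoip2.database.Reader(db_path) as reader:
        record = reader.city(ip_address)
        return (
            record.city.name,
            record.country.name,
            record.location.latitude,
            record.location.longitude,
        )

if __name__ == "__main__":
    # 203.0.113.5 is a documentation-reserved example address.
    print(locate("203.0.113.5"))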

Continue reading

Rendering CCTV (Somewhat) More Transparent

In a conversation with Prof. Andrew Clement this summer we got talking about the ever-increasing deployment of CCTV cameras throughout Canada. The conversation was, at least in part, motivated by the massive number of cameras being deployed throughout Vancouver in the lead-up to the 2010 Olympic Games; these cameras were a key focus of the 10th Annual Security and Privacy Conference, where the BC Privacy Commissioner said that he might resign if the surveillance infrastructure is not taken down following the games.

I don't want to delve into what, in particular, Prof. Clement is thinking of doing surrounding CCTV, given that I don't think he's publicly announced his intentions. What I will do, however, is outline my own two-pronged approach to rendering CCTV a little more transparent. At the outset, I'll note that:

  1. My method will rely on technology (augmented reality) that is presently only in the hands of a small minority of the population;
  2. My method is meant to become more useful as the years pass (and as the technology becomes increasingly accessible to consumers).

The broad goal is the following: develop a set of norms and processes to categorize different CCTV installations. Once that task is accomplished, a framework would be developed for an augmented reality program (here's a great blog on AR) that could 'label' where CCTV installations are and 'grade' them based on the already-established norms and processes.
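As a rough illustration of what the 'label and grade' framework might look like in data terms, here is a hedged Python sketch; the field names and the toy grading rubric are invented for this example rather than drawn from any established norms (which, per the above, would still need to be developed).

```python
# Illustrative only: one way the 'label and grade' data might be
# structured for an AR client. Fields and rubric are invented.
from dataclasses import dataclass, asdict
import json

@dataclass
class CCTVInstallation:
    latitude: float
    longitude: float
    operator: str          # e.g. "municipal", "private", "police"
    signage_posted: bool   # is the camera's presence disclosed on-site?
    retention_days: int    # how long footage is said to be kept

    def grade(self) -> str:
        """Toy rubric: disclosed, short-retention cameras score higher."""
        score = 0
        score += 1 if self.signage_posted else 0
        score += 1 if self.retention_days <= 30 else 0
        return {2: "A", 1: "B", 0: "C"}[score]

# An AR overlay could fetch records like this and draw them at the
# camera's coordinates, coloured by grade.
camera = CCTVInstallation(49.2827, -123.1207, "municipal", True, 14)
print(json.dumps({**asdict(camera), "grade": camera.grade()}))
```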

Continue reading

The Geek, Restraining Orders, and Theories of Privacy

I've been reading some work on privacy and social networks recently, and this, combined with Ratliff's "Gone Forever: What Does It Really Take to Disappear", has led me to think about whether a geek with a website that is clearly their own (e.g. Christopher-Parsons.com) should reasonably expect restraining orders to extend to digital spaces. I'm not really talking at the level of law, necessarily, but at the level of normativity: ought a restraining order limit a person from 'following' me online as it does from being near me in the physical world?

Restraining orders are commonly issued to prevent recurrences of abuse (physical or verbal) and stalking. While most people who have a website are unable to track who is visiting their webspace, what happens when you compulsively check your server logs (as many good geeks do) and can roughly correlate traffic to particular geo-locations? As a loose example, let's say that you were in a small town, 'gained' an estranged spouse, and then notice regular hits to your website from that small town after you've been away from it for years. Let's go further and say that you have few or no friends in that town, and that you do have a restraining order that is meant to prevent your ex-spouse from being anywhere near you. Does surfing to your online presence (we'll assume, for this posting, that they aren't commenting or engaging with the site) normatively constitute a breach of the order?
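For the curious, the log-checking habit described above takes only a few lines. This is a rough Python sketch assuming a server log in Common Log Format; the access.log file name is hypothetical, and the resulting addresses would still need to be run through a geolocation lookup (as in the earlier traceroute sketch) to get the town-level correlation at issue.

```python
# Rough sketch: pull visitor IPs out of a server log in Common Log
# Format and count repeat visits per address.
import re
from collections import Counter

LOG_LINE = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3}) ")  # leading IPv4 address

def visits_by_ip(log_path: str) -> Counter:
    counts: Counter = Counter()
    with open(log_path) as log:
        for line in log:
            match = LOG_LINE.match(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for ip, hits in visits_by_ip("access.log").most_common(10):
        print(f"{ip}\t{hits}")
```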

Continue reading

Context, Privacy, and (Attempted) Blogger Anonymity

While it's fine and good to leave a comment where neither you nor an anonymous blogger know one another, what happens when you do know the anonymous blogger and it's clear that they want to remain anonymous? This post tries to engage with this question, and focuses on the challenges that I experience when I want to post on an 'anonymous' blog where I know who is doing the blogging – it attends to the contextual privacy questions that race through my head before I post. As part of this, I want to think through how a set of norms might be established to address my own questions and worries, and a means of communicating those norms to visitors.

I've been blogging in various forms for a long time now – about a decade (!) – and in every blog I've ever had I use my name. This has been done, in part, because when I write under my name I'm far more accountable than when I write under an alias (or, at least, I think this is the case). This said, I recognize that my stance is slightly different than that of many bloggers out there – many avoid closely associating their published content with their names, and often for exceedingly good reasons. Sometimes a blogger wants to just vent, and doesn't want to deal with the social challenges that arise when people know that Tommy is angry. Others do so for personal safety reasons (angry/dangerous ex-spouses), some for career reasons (not permitted to blog, or worried about the effects of blogging on future job prospects), and some to avoid '-ist' related comments (sexist, racist, ageist, etc.).

Continue reading

Deep Packet Inspection and the Discourses of Censorship and Regulation

In the current CRTC hearings over Canadian ISPs' use of Deep Packet Inspection (DPI) to manage bandwidth, I see two 'win situations' for the dominant carriers:

  1. They can continue to throttle ‘problem’ applications in the future;
  2. The CRTC leaves the wireless market alone for now.

I want to talk about the effects of throttling problem applications, and about how people discussing DPI should focus on the negative consequences of regulation (something that is, admittedly, often done). In thinking about this, however, I want to first attend to the issues with censorship models, to render transparent the difficulties in relying on censorship-based arguments to oppose uses of DPI. Following this, I'll consider some of the effects of regulating access to content through protocol throttling. The aim is to suggest that individuals and groups opposed to the throttling of particular application-protocols should focus on the effects of regulation, a more productive space of analysis and argumentation, rather than on DPI as an instrument of censorship.
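Since the argument turns on what throttling 'particular application-protocols' means in practice, a toy illustration may help. DPI appliances commonly identify protocols by matching payload signatures; the Python sketch below checks only for the well-known BitTorrent handshake prefix, and is a deliberate simplification of what commercial appliances actually do.

```python
# Toy illustration of signature-based protocol identification, the
# mechanism behind application-protocol throttling. Real DPI gear
# uses far richer heuristics than a single prefix match.
BITTORRENT_HANDSHAKE = b"\x13BitTorrent protocol"

def classify(payload: bytes) -> str:
    """Label a packet payload by a crude signature match."""
    if payload.startswith(BITTORRENT_HANDSHAKE):
        return "bittorrent"   # a candidate for throttling
    return "unclassified"     # passed through at normal priority

# Example: the opening bytes of a BitTorrent peer handshake
# (prefix, 8 reserved bytes, then info hash and peer id).
sample = BITTORRENT_HANDSHAKE + bytes(8) + bytes(40)
print(classify(sample))       # -> bittorrent
```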

Let's first touch on the language of censorship itself. We typically understand this action in terms of a juridico-discursive model, or a model that relies on rules to permit or negate discourse. There are three common elements to this type of model:

Continue reading

DPI and Canadians’ Reasonable Expectations of Privacy

[Note – I preface this with the following: I am not a lawyer, and what follows is a non-lawyer's ruminations on how the Supreme Court's thoughts on reasonable expectations of privacy intersect with what deep packet inspection (DPI) can potentially do. This is not meant to be a detailed examination of particular network appliances with particular characteristics, but is much, much more general in nature.]

Whereas Kyllo v. United States saw the US Supreme Court assert that thermal-imaging devices, when directed towards citizens' homes, did constitute an invasion of citizens' privacy, the corresponding Canadian case (R. v. Tessling) saw the Supreme Court assert that RCMP thermal imaging devices did not violate Canadians' Section 8 Charter rights ("Everyone has the right to be secure against unreasonable search or seizure"). The Court's conclusions emphasized information privacy interests at the expense of normative expectations – thermal information, on its own, was practically 'meaningless' – which has led Ian Kerr and Jena McGill to worry that informational understandings of privacy invoke:

Continue reading