Public and Privacy Policy Implications of PHAC’s Use of Mobility Information

Last week I appeared before the House of Commons' Standing Committee on Access to Information, Privacy, and Ethics to testify about the public and privacy policy implications of PHAC's use of mobility information since March 2020. The oral comments I provided to the committee were, substantially, a truncated version of the brief I submitted; if interested, my oral comments are available to download. What follows in this post is the content of the brief as submitted.

Introduction

  1. I am a senior research associate at the Citizen Lab, Munk School of Global Affairs & Public Policy at the University of Toronto. My research explores the intersection of law, policy, and technology, and focuses on issues of national security, data security, and data privacy. While I submit these comments in a professional capacity, they do not necessarily represent the full views of the Citizen Lab.
Continue reading

Recording of ‘Traffic Analysis, Privacy, and Social Media’

The abstract for my presentation, as well as the references, has already been made available. I wasn't aware (or had forgotten) that all the presentations from Social Media Camp Victoria were going to be recorded and put on the web, but thought that others visiting this space might be interested in my talk. The camera is zoomed in on me, which means you miss some of the context provided by the slides and by my references to people in the audience as I talked. (Having quickly looked at and listened to some of what I say, I feel as though I'm adopting a presentation style similar to a few people I watch a lot; I'm not sure how I feel about that. The inability to actually walk around, being tethered to the mic and laptop, was particularly uncomfortable, and I think it comes across in my body language.)

Immediately after my presentation, Kris Constable of PrivaSecTech gives a privacy talk on social media that focuses on the inability to control the dissemination of personal information. Following his presentation, the two of us take questions from the audience for twenty or thirty minutes.


Apple and Locational Data Sharing

Apple's entrance into the mobile advertising marketplace began with the announcement of iAd. Alongside iAd comes persistent locational surveillance of Apple's customers, to the advantage of advertisers and Apple alike. The company's advertising platform is controversial because Apple gives it a privileged position in their operating system, iOS4, and because the platform can draw on an iPhone's locational awareness (using the phone's GPS functionality) to deliver targeted ads.

In this post I'm going to first give a brief background on iAd and some of the broader issues surrounding Apple's deployment of its advertising platform. From there, I want to recap what Steve Jobs said in a recent interview at All Things Digital 8 about how Apple approaches locational surveillance through its mobile devices, and then launch into an analysis of Apple's recently changed terms of service for iOS4 devices as they relate to collecting, sharing, and retaining records of an iPhone's geographic location. I'll finish by noting that Apple may have inadvertently gotten itself into serious trouble: its heavy-handed control of the iAd environment, combined with the modified privacy-related elements of its terms of service, seems to have awoken the German data protection authorities. Hopefully the Germans can bring some transparency to a company regularly cloaked in secrecy.

Apple launched the iAd beta earlier this year, integrating the advertising platform into its mobile environment such that ads are displayed within applications, and clicking on an ad does not take customers out of the application they are using. iAds can access core iOS4 functionality, including locational information, and can be coded using HTML5 to provide rich advertising experiences. iAd was only made possible by Apple's January acquisition of Quattro, a mobile advertising agency. Apple purchased Quattro after Google beat it to acquiring AdMob last year (the FTC recently cited iAd as a contributing reason why the Google transaction was permitted to go through). Ostensibly, the rich advertising from iAds is intended to help developers produce cheap and free applications for Apple's mobile devices while retaining a long-term, ad-based revenue stream. Arguably, with Apple taking a 40% cut of all advertising revenue and limiting access to the largest rich-media mobile platform in the world, advertising makes sense for Apple's own bottom line, and it's just nice that they can 'help' developers along the way… Continue reading

Privacy Norms in the Bio-Digital World

The Western world is pervaded by digital information, to the point where we might argue that most Western citizens operate in a bio-digital field that is constituted by the conditions of life and life's (now intrinsic) relationships to digital code. While historically (if 30 years or so can withstand the definitional intonations of 'historically') such notions of code would dominantly pertain to government databanks and massive corporate uses of code and data, with the advent of the 'social web' and the ease of mashups we are forced to engage with questions of how information, code, and privacy norms and regulations pertain to individuals' usage of data sources. While in some instances we see penalties being handed down to individuals who publicly release sensitive information (such as Sweden's Bodil Lindqvist, who was fined for posting personal data about fellow church parishioners without consent), what is the penalty when public information is taken out of its original format and mashed up with other data sources? What happens when we correlate data to 'map' it?

Let's get into some 'concrete' examples to engage with this matter. First, I want to point to geo-locating traceroute data (the information that identifies the origin of website visitors' data traffic) to start thinking about mashups and privacy infringements. Second, I'll briefly point to some of the challenges arising from the meta-coding of the world using Augmented Reality (AR) technologies. The overall aim is not to 'resolve' any privacy questions, but to reflect on differences in the 'specificity' of geolocation technology, the implications of that specificity, and the potential need to establish a new set of privacy norms given the bio-digital fields in which we find ourselves immersed.
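To make the first example concrete, here is a minimal sketch of how a website operator might turn a single visitor IP address from a server log into an approximate physical location. It assumes the geoip2 Python library and a local copy of MaxMind's free GeoLite2-City database; the IP address shown is purely illustrative.

```python
# A minimal sketch: mapping a visitor's IP address to a coarse physical
# location. Assumes the geoip2 library is installed and a GeoLite2-City
# database file sits alongside the script.
import geoip2.database

def locate_visitor(ip_address: str) -> dict:
    """Return an approximate city-level location for an IP address."""
    with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
        response = reader.city(ip_address)
        return {
            "city": response.city.name,
            "country": response.country.name,
            "latitude": response.location.latitude,
            "longitude": response.location.longitude,
        }

# One line from a visitor log is enough to place a reader on a map once
# it is joined with a GeoIP database. (8.8.8.8 is a well-known public
# resolver, used here only as an example address.)
print(locate_visitor("8.8.8.8"))
```

The point of the sketch is how little is required: no consent, no notice, just a public log entry correlated with a freely available database.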

Continue reading

Rendering CCTV (Somewhat) More Transparent

In a conversation with Prof. Andrew Clement this summer we got talking about the ever-increasing deployment of CCTV cameras throughout Canada. The conversation was, at least in part, motivated by the massive number of cameras being deployed throughout Vancouver in the leadup to the 2010 Olympic Games; these cameras were one of the key focuses of the 10th Annual Security and Privacy Conference, where the BC Privacy Commissioner said that he might resign if the surveillance infrastructure were not taken down following the games.

I don't want to delve into what, in particular, Prof. Clement is thinking of doing around CCTV, given that I don't think he has publicly announced his intentions. What I will do, however, is outline my own two-pronged approach to rendering CCTV a little more transparent. At the outset, I'll note that:

  1. My method relies on technology (augmented reality) that is presently in the hands of only a small minority of the population;
  2. My method is meant to become more and more useful as the years pass (and as the technology becomes increasingly accessible to consumers).

The broad goal is the following: develop a set of norms and processes to categorize different CCTV installations. With that task accomplished, a framework would be developed for an augmented reality program (here's a great blog on AR) that could 'label' where CCTV installations are and 'grade' them against the established norms and processes.
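As a rough illustration of what a record in such a framework might look like, here is a minimal sketch in Python. The fields, category names, and grading rubric are all illustrative assumptions of mine, not an established standard; an AR client would presumably query records near the user's position and render each camera's location and grade as an on-screen label.

```python
# A minimal sketch of a categorized CCTV installation record for an AR
# overlay. The categories and grading rubric are illustrative only.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Operator(Enum):
    PUBLIC = "public"    # e.g. police or municipal cameras
    PRIVATE = "private"  # e.g. storefront or corporate cameras
    UNKNOWN = "unknown"

@dataclass
class CCTVInstallation:
    latitude: float
    longitude: float
    operator: Operator
    signage_posted: bool             # is the camera disclosed on-site?
    retention_days: Optional[int]    # advertised footage retention, if known

    def grade(self) -> str:
        """Assign a rough transparency grade (an illustrative rubric)."""
        score = 0
        score += 1 if self.signage_posted else 0
        score += 1 if self.operator is not Operator.UNKNOWN else 0
        score += 1 if self.retention_days is not None else 0
        return {3: "A", 2: "B", 1: "C", 0: "D"}[score]

# A hypothetical well-disclosed storefront camera in downtown Victoria.
camera = CCTVInstallation(48.4284, -123.3656, Operator.PRIVATE, True, 30)
print(camera.grade())  # "A"
```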

Continue reading

Facial Blurring = Securing Individual Privacy?

The above image was taken by a Google Streetview car. As is evident, all of the faces in the picture have been blurred in accordance with Google's anonymization policy. I think that the image works nicely as a lightning rod to capture some of the criticisms and questions that have arisen around Streetview:

  1. Does the Streetview image-taking process itself, generally, constitute a privacy violation of some sort?
  2. Are individuals’ privacy secured by just blurring faces?
  3. Is this woman's privacy being violated/infringed upon in some way as a result of having her photo taken?

Google's response is, no doubt, that individuals who feel an image is inappropriate can contact the company, and it will take the image offline. The problem is that this puts the onus on individuals. We might be willing to affirm that Google recognizes photographic privacy as a social value, insofar as any member of society who sees an image as a privacy infringement/violation can also ask Google to remove it. Still, even in the latter case this 'outsources' privacy to the community and is a reactive, rather than a proactive, way to limit privacy invasions (if, in fact, the image above constitutes an 'invasion'). Regardless of whether we want to see privacy as an individual or a social value (or, better, as valuable both for individuals and society), we can more simply ponder whether blurring the face alone is enough to secure individuals' privacy. Is anonymization the same as securing privacy?
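To ground that question, here is a minimal sketch of blur-only 'anonymization' in the same spirit as Streetview's approach, using OpenCV's stock Haar-cascade face detector (this is not Google's actual pipeline, and the file names are placeholders). Notice what the process leaves untouched: clothing, body, companions, and location all remain fully identifiable.

```python
# A minimal sketch of blur-only face anonymization using OpenCV's
# bundled Haar cascade. Everything outside the detected face boxes
# is left exactly as photographed.
import cv2

def blur_faces(input_path: str, output_path: str) -> None:
    image = cv2.imread(input_path)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Replace each detected face region with a heavy Gaussian blur.
        image[y:y + h, x:x + w] = cv2.GaussianBlur(
            image[y:y + h, x:x + w], (51, 51), 0
        )
    cv2.imwrite(output_path, image)

blur_faces("street_photo.jpg", "street_photo_blurred.jpg")
```

A few lines of widely available code, in other words, implement the entire 'anonymization', which is precisely why it is worth asking whether it secures privacy at all.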

Continue reading