For roughly the past two years I’ve been working with colleagues to learn how Automatic Number Plate Recognition (ANPR) systems are used in British Columbia, Canada’s westernmost province. As a result of this research one colleague, Rob Wipond, has published two articles on how local authorities and the RCMP are using ANPR technologies. Last February I disclosed some of our findings at the Reboot privacy and security conference, highlighting potential uses of the technology and many of the access to information challenges that we had experienced in the course of our research. Another colleague, Kevin McArthur, has written several pieces about ANPR on his website over the years and is largely responsible for getting Rob and me interested, and involved, in researching the technology and the practices associated with it.
The most recent piece of work to come out of our research is a paper that I, Joseph Savirimuthu, Rob, and Kevin have written. Joseph and I will be presenting it in Florence later this month. The paper, titled “ANPR: Code and Rhetorics of Compliance,” examines BC and UK deployments of ANPR systems to explore the rationales and obfuscations linked to the programs. The paper is presently in a late draft, so if you have any comments or feedback then please send them my way. The abstract is below, and you can download the paper from the Social Sciences Research Network.
Automatic Number Plate Recognition (ANPR) systems are gradually entering service in Canada’s western province of British Columbia and are prolifically deployed in the UK. In this paper, we compare and analyze some of the politics and practices underscoring the technology in these jurisdictions. Drawing from existing and emerging research we identify key actors and how authorities marginalize access to the systems’ operation. Such marginalization is accompanied by rhetorics of privacy and security that are used to justify novel mass surveillance practices. Authorities justify the public’s lack of access to ANPR practices and technical characteristics as a key to securing environments and making citizens ‘safe’. After analyzing incongruences between authorities’ conceptions of privacy and security, we articulate means of resisting intrusive surveillance practices by reshaping agendas surrounding ANPR.
Download paper from the Social Sciences Research Network
UPDATE: The paper is now published in the European Journal of Law and Technology.
If you spend much time working with computers then you’re likely familiar with metadata, or data about data. In the digital era metadata is relied upon for many of the tagging and categorization systems that are seen in popular web environments, such as Twitter, Digg, Delicious, Facebook, and so forth, and is more generally used to define, structure, and administrate data across all digital environments. I should state, upfront, that metadata is incredibly valuable: nothing that I’m going to write about should leave you with the suggestion that metadata should be removed from the digital landscape or could be removed. Instead I’m advocating for a responsible use of metadata.
In this post I will be drawing on a pair of examples to underscore just how much data is contained in popular metadata structures: the information divulged every time a person tweets on Twitter, and what your mobile phone operator may be giving up to third parties when you browse the web on your phone. In the latter case, especially, we see that metadata is not just important for routing data traffic but is also responsible for disclosing a considerable amount of personal information. I’ll conclude by noting, once again, that our privacy regulators, commissioners, advocates, and researchers need additional funding if citizens are to have those parties regularly identify ‘bad’ metadata practices and seek rapid remedies before the data ends up being data-mined for illicit or unjustifiable reasons.
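To make the point concrete, here is a minimal sketch of the two examples above. The tweet fields below mirror the kinds of attributes Twitter’s API has historically attached to a status (timestamp, posting client, opt-in coordinates, profile location and time zone), and the request headers mirror subscriber-identifying headers that some mobile operators’ gateways have been reported to inject; all of the values, and the helper functions themselves, are invented for illustration.

```python
# A message's metadata, sketched as the kind of payload a tweet arrives in.
# Field names follow Twitter's historical API attributes; values are made up.
SAMPLE_TWEET = {
    "text": "Off to the conference!",          # the characters the user actually wrote
    "created_at": "Mon Jun 06 20:07:10 +0000 2011",
    "source": "web",                            # client application used to post
    "geo": {"coordinates": [48.43, -123.36]},   # opt-in location, if enabled
    "user": {
        "screen_name": "example_user",
        "location": "Victoria, BC",
        "time_zone": "Pacific Time (US & Canada)",
    },
}

# Headers of the kind some mobile gateways have been reported to add to
# subscribers' web requests; the names and values here are hypothetical.
SAMPLE_HEADERS = {
    "User-Agent": "ExamplePhone/1.0",
    "x-up-subno": "555123456-example",   # a subscriber identifier
    "x-nokia-msisdn": "15551234567",     # the subscriber's phone number
}

def metadata_fields(tweet):
    """List every attribute attached to a tweet other than the text itself."""
    fields = []
    def walk(obj, prefix=""):
        for key, value in obj.items():
            name = f"{prefix}{key}"
            if name == "text":
                continue  # skip the message body; everything else is metadata
            if isinstance(value, dict):
                walk(value, name + ".")
            else:
                fields.append(name)
    walk(tweet)
    return fields

def identifying_headers(headers):
    """Flag request headers that could identify the individual subscriber."""
    suspect = ("msisdn", "subno", "imsi")
    return [h for h in headers if any(s in h.lower() for s in suspect)]

print(metadata_fields(SAMPLE_TWEET))
print(identifying_headers(SAMPLE_HEADERS))
```

Even this toy payload carries six metadata attributes alongside one short message, and two of the three request headers are enough to tie browsing activity to a specific subscriber; that asymmetry is the heart of the argument that follows.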
While I haven’t posted much this month, it isn’t because I’m not writing: it’s because what I’m writing just doesn’t seem to pull together very well, and so I have 4 or 5 items held in ‘draft’. See, I’ve been trying to integrate thoughts on accessible versus technically correct understandings of technology as it relates to privacy, on issues of public relations and the use of FUD by privacy activists, and on what I think of the idea of ‘anonymity’ in digital environments that are increasingly geared to map, track, and trace people’s actions. Given that it’s Data Privacy Day, I thought that I should try to pull some of these thoughts together, and so today I’m going to draw on some of those aforementioned ideas and, in particular, start thinking about anonymity in our present digitally networked world.
To take the ‘effort’ to try and remain anonymous requires some kind of motivation, and in North America that motivation is sorely lacking. North America isn’t Iran or China or North Korea; Canadians, in particular, enjoy a somewhat enviable position where even with the government prorogued – a situation that, were it to happen in Afghanistan, would have pundits and politicians worrying about possibilities of tyranny and violence – there isn’t a perception that Canadians ought to be fearful that proroguement heralds the beginning of a Canadian authoritarian state, or the stripping of Charter rights and freedoms. This said, I think that people in the West are realizing that, as their worlds are increasingly digitized, their ‘analogue’ expectations of privacy are not, and have not for some time, been precisely mirrored in the digital realm. This awareness is causing worry and consternation, but is not yet (and may never be) sufficient for wide-scale adoption of anonymization technologies. Instead, we have worry without (much) action.
For the past few weeks I’ve been working away on a paper that tries to bring together some of the CRTC filings that I’ve been reading for the past few months. This is a slightly revised and updated version of a paper that I presented to the Infoscape research lab recently. Many thanks to Fenwick McKelvey for taking the lead to organize that, and also to Mark Goldberg for inviting me to the Canadian Telecom Summit, where I gained an appreciation for some of the issues and discussions that Canadian ISPs are presently engaged in.
Canadian ISPs are developing contemporary netscapes of power. Such developments are evidenced by ISPs categorizing, and discriminating against, particular uses of the Internet. Simultaneously, ISPs are disempowering citizens by refusing to disclose the technical information needed to meaningfully contribute to network-topology and packet discrimination discussions. Such power relationships become stridently manifest when observing Canadian public and regulatory discourse about a relatively new form of network management technology, deep packet inspection. Given the development of these netscapes, and Canadian ISPs’ general unwillingness to transparently disclose the technologies used to manage their networks, privacy advocates concerned about deep packet inspection appliances’ abilities to discriminate between data traffic should lean towards adopting a ‘fundamentalist’, rather than a ‘pragmatic’, attitude concerning these appliances. Such a position will help privacy advocates resist the temptation of falling prey to case-by-case analyses that threaten to obfuscate these devices’ full (and secretive) potentialities.
Full paper available for download here. Comments are welcome; either leave them here on the blog, or fire something to the email address listed on the first page of the paper.