Draft – Deep Packet Inspection: Privacy, Mash-ups, and Dignities

This is a draft of the paper that I’ll be presenting at the Counter: Piracy and Counterfeit conference in Manchester in a few days. It’s still rough around the edges, but it feels like a substantial piece. Comments, as always, are welcome.

Abstract:

Privacy operates as an umbrella-like concept that shelters liberal citizens’ capacity to enjoy autonomy, secrecy, and liberty, values that are key to citizens enjoying their psychic and civil dignity. As digitisation sweeps through the post-industrial information economy, these same citizens are increasingly sharing and disseminating copyrighted files using peer-to-peer file sharing networks. In the face of the economic challenges posed by these networks, some members of the recording industries have sought agreements with Internet Service Providers (ISPs) to govern the sharing of copyrighted data. In Britain, file-sharing governance has recently manifested in the form of Virgin Media inserting deep packet inspection (DPI) appliances into their network to monitor levels of infringing file-sharing. In this presentation, I argue that ISPs and vendors must demonstrate technical and social transparency over their use of DPI to assuage worries that communications providers are endangering citizens’ psychic and civil dignities. Drawing on recent Canadian regulatory processes concerning Canadian applications of DPI, I suggest that transparency between civil advocacy groups, ISPs, and vendors can garner the trust required to limit harms to citizens’ psychic dignity. Further, I maintain that using DPI appliances to detect copyright infringement and apply three-strikes proposals unduly threatens citizens’ civil dignities; alternate governance strategies must be adopted to preserve citizens’ civil dignity.

Download paper

IPv6 and the Future of Privacy

There is an increasing urgency to transition to a new infrastructure for address space on the Internet, one in which all individuals and their devices could be uniquely identifiable by their Internet Protocol (IP) address(es). It is in light of this surveillant future that France’s recent ruling that IP addresses are not personally identifiable information is so serious. Further, it is this longer temporal viewpoint (i.e. not just the here and now) that has more generally worried technologists about governments’ binary ‘yes/no’ rulings on whether IP addresses are private information.

Before I go any further, let me break down what an IP address is, the distinctions between versions 4 (IPv4) and 6 (IPv6), and then get to the heart of the privacy-related issues concerning the transition to IPv6. The technical infrastructure of the ‘net tends to be seen as dreadfully boring but, as is evidenced by the (possible) computer failures of Toyota vehicles, what goes on ‘under the hood’ of the ‘net is of critical importance to understand and think about. It’s my hope that you’ll browse away with concerns and thoughts about the future of privacy in an increasingly connected biodigital world.
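To make the worry about unique identifiability concrete: under IPv6 stateless address autoconfiguration, a device’s interface identifier can be derived directly from its hardware (MAC) address using the modified EUI-64 scheme, so the same identifier can follow the device from network to network. A minimal sketch of that derivation (the MAC address below is a made-up example):

```python
# Sketch: how an autoconfigured IPv6 interface identifier can embed a
# device's MAC address (the modified EUI-64 scheme from RFC 4291).

def eui64_interface_id(mac: str) -> str:
    """Derive the modified EUI-64 interface identifier from a MAC address."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit of the first octet
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    # Pack the eight octets into four 16-bit groups, IPv6-style
    groups = [f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

# A hypothetical laptop's MAC address:
print(eui64_interface_id("00:1a:2b:3c:4d:5e"))  # → 21a:2bff:fe3c:4d5e
```

Because this identifier stays constant regardless of which network prefix the device joins, it functions as a persistent tracking token; it is precisely this property that later ‘privacy extensions’ to autoconfiguration were designed to mitigate.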

Continue reading

Digital Crises and Internet Identity Cards

Something that you learn if you (a) read agenda-setting and policy laundering books, or (b) have ever worked in a bureaucratic environment, is that it’s practically criminal to waste a good crisis. When a crisis comes along, various policy windows tend to open up unexpectedly, and if you have the right policies waiting in the wings you can ram through proposals that would otherwise be rejected out of hand. An example: the Patriot Act wasn’t written in just a few days; it was presumably resting in someone’s desk, just waiting to be dusted off and implemented. 9/11 was the crisis that opened the policy windows required to ram that particular policy through the American legislative system. Moreover, the ‘iPatriot Act’, its digital equivalent, is already written and just waiting in a drawer for a similar crisis. With the rhetoric ramping up around Google’s recent proclamations that they were hacked by the Chinese government (or agents of that government), we’re seeing bad old ideas surfacing once again: advocates of ‘Internet Identity Cards’ (IICs) are checking whether these cards’ requisite policy window is opening.

The concept of IICs is not new: in 2001 (!) the Institute of Public Policy Research suggested that children should take ‘proficiency tests’ at age 11 to let them ‘ride freer’ on the ‘net. Prior to passing this ‘test’, children would have restrictions placed on their browsing abilities, based (presumably) on some sort of identification system. The IIC, obviously, didn’t take off – children aren’t required to ‘license up’ – but the recession of the IIC into the background of the Western cyberenvironment hasn’t meant that either the research and design or the infrastructure deployment for these cards has gone away. Who might we identify as a national leader of the IIC movement, and why are such surveillance mechanisms likely incapable of meeting stated national policy objectives yet nevertheless inevitable?

Continue reading

APIs, End-Users, and the Privacy Commons

Mozilla is throwing their hat into the ‘privacy commons’ ring. Inspired by Aza Raskin’s ‘Making Privacy Policies Not Suck’, Mozilla is trying to think through a series of icons intended to educate users about websites’ privacy policies. This is inspirational, insofar as a large corporation is actually taking up the challenge of the privacy commons, but at the same time we’ve heard before that a uniform privacy analysis system is coming… in 1998. A working draft for the Platform for Privacy Preferences (P3P) was released on May 19, 1998, during the still-heady times when people thought that Privacy Enhancing Technologies (PETs) could secure people’s online privacy or, at least, make them aware of privacy dangers. The P3P initiative failed.

Part of the reason behind P3P’s failure was the length of its documentation (it was over 150% the length of Alice in Wonderland) and the general challenge of ‘properly’ checking for privacy compliance. Perhaps most importantly, when the P3P working group disbanded in 2007 they noted that a key reason behind their failure was “insufficient support for current Browser implementors”. Perhaps with Mozilla behind the project, with privacy increasingly being seen as a space of product competition and differentiation, and with a fresh set of eyes that can learn from the successes of Creative Commons and other privacy initiatives, something progressive will emerge from Mozilla’s effort.

Continue reading

Data Privacy Day and Anonymity

While I haven’t posted much this month, it isn’t because I’m not writing: it’s because what I’m writing just doesn’t seem to pull together very well, and so I have four or five items held in ‘draft’. See, I’ve been trying to integrate thoughts on accessible versus technically correct understandings of technology as it relates to privacy, on public relations and the use of FUD by privacy activists, and on what I think of the idea of ‘anonymity’ in digital environments that are increasingly geared to map, track, and trace people’s actions. Given that it’s Data Privacy Day, I thought that I should try to pull some of these thoughts together, and so today I’m going to draw on some of those aforementioned ideas and, in particular, start thinking about anonymity in our present digitally networked world.

To make the ‘effort’ to try and remain anonymous requires some kind of motivation, and in North America that motivation is sorely lacking. North America isn’t Iran or China or North Korea; Canadians, in particular, occupy a somewhat enviable position where, even with the government prorogued – a situation that, were it to happen in Afghanistan, would have pundits and politicians worrying about possibilities of tyranny and violence – there isn’t a perception that Canadians ought to fear that prorogation heralds the beginning of a Canadian authoritarian state, or the stripping of Charter rights and freedoms. This said, I think that people in the West are realizing that, as their worlds are increasingly digitized, their ‘analogue’ expectations of privacy are not, and have not for some time been, precisely mirrored in the digital realm. This awareness is causing worry and consternation, but it is not yet (and may never be) sufficient for wide-scale adoption of anonymization technologies. Instead, we have worry without (much) action.

Continue reading

Dispelling FUD: Iran and ISP Surveillance

Since the re-election of incumbent president Mahmoud Ahmadinejad, the world has witnessed considerable political tension in Iran. Protests over the questionable electoral results, beatings and deaths of political protestors, recurring protests by Iranians associated with the Green Revolution, and transmissions of information amongst civil and global actors have all been broadcast using contemporary communications systems. Twitter, blogs, Facebook, and mobile phone video have enabled Iranians to coordinate, broadcast, and receive information. The existence of Web 2.0 infrastructure has set the conditions under which the Green Revolution operates.

The Iranian government quickly recognized the power of cheap social coordination technologies and, in response, drastically reduced the capacity of national Internet links – the government, in effect, closed the nation’s Internet faucet, greatly reducing how quickly data could be transmitted to, and received from, the ‘net as a whole. This claim is substantiated by Arbor Networks’ (Internet) border reports, which demonstrate how, immediately after the presidential election, data traffic entering and exiting the nation plummeted. (It should be noted that Arbor is a prominent supplier of deep packet inspection equipment.)

Prior to trying to dispel the Fear, Uncertainty, and Doubt (FUD) surrounding the contemporary Iranian ISP-surveillance system that is regularly propagated by the media, I need to give a bit of context on the telecommunications structure in Iran.

Continue reading