Candace Mooers asked me a good question today about deep packet inspection (DPI) in Canada. I’m paraphrasing, but it was along the lines of “how might DPI integrate into the discussion of lawful access and catching child pornographers?” I honestly hadn’t thought about this, but I’ll recount here the response I put together on the fly, in the interests of (hopefully) generating some discussion on the matter.
I’ll preface this by noting what I’ve found exceptional in the new legislation recently presented by the Canadian Conservative government (full details on Bill C-47 available here, and C-46 here): police can now require ISPs to hold onto particular information, whereas previously they typically required a judicial warrant to compel ISPs to preserve such data. Further, some information, such as subscriber details, can be turned over to police immediately, though there is a notification process that the officers making the request must immediately follow. With these (incredibly brief!) bits of the bills in mind, it’s important for this post to note that some DPI appliances are marketed as being able to detect content under copyright as it is transferred. Allot, Narus, ipoque, and others claim that this capacity is built into many of the devices they manufacture: a hash code, which can be thought of metaphorically as a digital fingerprint, can be generated for known files under copyright, and when that fingerprint is detected, rules are applied to the packet transfer in question. The challenge (as always!) is finding the processing power to actually scan packets as they scream across the ’net and properly identify their originating application, application-type, or (in the case of files under copyright) the actual file(s) in question.
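To make the “digital fingerprint” metaphor concrete, here is a minimal sketch in Python. The vendors do not publish which hash functions their appliances actually use, so SHA-256 is purely an illustrative stand-in:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest of the data -- one way to generate the
    'digital fingerprint' described above. (SHA-256 is an assumption
    for illustration; the vendors' actual hash functions are not public.)"""
    return hashlib.sha256(data).hexdigest()

# Two byte-identical copies of a file always yield the same fingerprint...
print(fingerprint(b"the same file contents") ==
      fingerprint(b"the same file contents"))   # True
# ...while changing even a single byte yields a completely different one.
print(fingerprint(b"the same file contents") ==
      fingerprint(b"the same file contents!"))  # False
```

Note that this property cuts both ways: exact-match hashing is cheap to compute, but any re-encoding or trivial modification of a file changes its fingerprint, which is part of why detection can never be perfectly reliable.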
Let’s assume, for the purposes of detecting particular files, that inspection of packets is largely done offline (i.e. packets can be copied to a separate processing unit, rather than examined in absolute real-time), or that the devices will become fast enough to do these analyses at scale on the fly in the relatively near future (say, 24 months). (As a note: I see the former, rather than the latter, as the more effective technique as the technology stands today, at least for mass surveillance of data traffic. This is just based on my understanding of the computational power available to DPI appliances, and is subject to change as I learn more about the technology or as processor technologies advance.) Shouldn’t it then be a relatively easy process for authorities, working in conjunction with network administrators, to develop a hash-list of illegal files, such that any time these files are suspected of crossing the network the authorities are automatically notified (DPI is predictive, and thus cannot be relied on for 100% accuracy)? I’m not talking about files guarded by copyright – the RCMP has noted that file sharing is not one of its priorities – but material that Canadian society deems particularly nasty, such as illicit images of naked children.
With a detailed hash-list of known illegal images/text/movies, shouldn’t it then be a relatively simple process both to limit much of the sharing of these files (when a match is detected, stop the flow of packets ‘tagged’ with that ‘fingerprint’) and to notify authorities? Law enforcement could set up an automated system that issues demands to the ISP(s) in question, and then establish procedures to gain access to subscriber information in an effort to quickly find and question those suspected of peddling kiddie porn. This notion of mass surveillance for law enforcement purposes leads us to ask what we, as a society, want these devices used for, or what drivers should motivate the technology: do we want to limit these appliances to balancing network congestion/network load, or go further and try to identify ‘clearly’ criminal actions? I worry about the long-term effects of using DPI for automated surveillance to detect criminal behaviour, but my willingness to accept a bit more messiness in this world at the expense of increasingly efficient detection of deviance isn’t necessarily a commonly held position…
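The block-and-notify workflow described above can be sketched as a simple lookup against such a hash-list. Everything here is invented for illustration (the hash values, the function names, the returned “actions”); real appliances would also have to reassemble files from packet streams first, which this sketch skips entirely:

```python
import hashlib

# Hypothetical hash-list of known illegal files. The entry is a made-up
# example payload; a real list would be maintained by authorities.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example flagged payload").hexdigest(),
}

def inspect(reassembled_file: bytes) -> str:
    """Offline inspection step: hash the reassembled transfer and check it
    against the hash-list. Returns the action an appliance might take in
    the scenario sketched above (names are illustrative, not a vendor API)."""
    digest = hashlib.sha256(reassembled_file).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        # Match: stop the flow of packets and notify authorities.
        return "block-and-notify"
    # No match: let the traffic through untouched.
    return "forward"

print(inspect(b"example flagged payload"))  # block-and-notify
print(inspect(b"ordinary traffic"))         # forward
```

The lookup itself is trivial; the hard parts, as noted above, are reassembling files from packet streams at line rate and living with the false negatives that exact-match hashing guarantees.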
I ended the interview with Candace (at her request!) by leaving listeners with these questions: “What degree or level of surveillance do we, as a Canadian people, see as ‘good’ on ISP networks – what discrimination (in reference to packet discrimination) is permissible, and what is not? How do we actually go about developing a consensus on surveillance, and what processes should we engage in to codify that consensus?”
I actually don’t have responses to these queries. There are people who know both surveillance and discrimination literature far better than I likely ever will – my aim (at the moment) is just to puzzle through how this technology might intersect with privacy, surveillance, and discrimination literature, and gradually develop insights from which others can pursue far more nuanced, far more profound ethical thinking about DPI and similar network appliances.