Technology, Thoughts & Trinkets

Touring the digital through type

Deep Packet Inspection and Law Enforcement

Candace Mooers asked me a good question today about deep packet inspection (DPI) in Canada. I’m paraphrasing, but it was along the lines of “how might DPI integrate into the discussion of lawful access and catching child pornographers?” I honestly hadn’t thought about this before, so I’ll recount my response (which was put together on the fly) in the interests of (hopefully) generating some discussion on the matter.

I’ll preface this by noting what I’ve found exceptional in the new legislation recently presented by the Canadian Conservative government (full details on bill C-47 available here, and C-46 here): police can now require ISPs to hold onto particular information, whereas they previously required a judicial warrant to compel ISPs to retain particular data. Further, some information, such as subscriber details, can be turned over to police immediately, though there is a process of notification that must immediately be followed by the officers making the request. With these (incredibly brief!) bits of the bills in mind, it’s important for this post to note that some DPI appliances are marketed as being able to detect content that is under copyright as it is transferred. Allot, Narus, ipoque, and more claim that this capacity is built into many of the devices they manufacture; a hash code, which can be thought of metaphorically as a digital fingerprint, can be generated for known files under copyright, and when that fingerprint is detected, rules are applied to the packet transfer in question. The challenge (as always!) is finding the processor power to actually scan packets as they scream across the ’net and properly identify their originating application, application-type, or (in the case of files under copyright) the actual file(s) in question.
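To make the “digital fingerprint” metaphor concrete, here is a minimal sketch, assuming SHA-256 as the hashing scheme; the blocklist entry and function names are illustrative, not drawn from any vendor’s actual implementation:

```python
# A hash code acts as a fingerprint: the same bytes always produce the
# same digest, so known files can be recognized by digest alone.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest identifying this exact byte sequence."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of fingerprints for known files.
KNOWN_FILES = {fingerprint(b"example copyrighted payload")}

def matches_known_file(payload: bytes) -> bool:
    """True when the payload's fingerprint appears in the database."""
    return fingerprint(payload) in KNOWN_FILES
```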

Let’s assume, for the purposes of detecting particular files, that inspection of packets is largely done offline (i.e. packets can be copied to a separate processing unit, so there’s no need to examine each packet in absolute real-time), or that the devices become quick enough to do these analyses at scale on the fly in the relatively near future (24 months). (As a note: I see the former, rather than the latter, as the more effective technique as the technology stands today, at least in terms of mass surveillance of data traffic. This is just based on my understanding of the computational power available to DPI appliances, and is subject to change as I learn more about the technology and as processor technologies advance.) Shouldn’t it then be a relatively easy process for authorities, working in conjunction with network administrators, to develop a hash-list of illegal files, where any time these files are suspected of crossing the network the authorities are automatically notified (DPI is predictive, and thus cannot be relied on to have 100% accuracy rates)? I’m not talking about files guarded by copyright – the RCMP has noted that they don’t see file sharing as one of their priorities – but material that Canadian society deems particularly nasty, such as illicit images of naked children.
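The “offline” approach described above can be sketched as follows: packets are copied off the wire into a queue and scanned by a separate worker, so live traffic is never delayed by the analysis. All names and the blocklist entry here are hypothetical:

```python
# Mirror-and-scan: forward traffic immediately, analyze a copy later.
import hashlib
from queue import Queue

# Hypothetical hash-list of known illicit files.
BLOCKLIST = {hashlib.sha256(b"known illicit file").hexdigest()}

mirror_queue: Queue = Queue()

def mirror_packet(payload: bytes) -> bytes:
    """Forward the packet at once; queue a copy for offline analysis."""
    mirror_queue.put(payload)
    return payload  # traffic flows on untouched

def scan_worker() -> list:
    """Drain the queue and report payloads whose hash matches the list."""
    alerts = []
    while not mirror_queue.empty():
        digest = hashlib.sha256(mirror_queue.get()).hexdigest()
        if digest in BLOCKLIST:
            alerts.append(digest)
    return alerts
```

The design choice matters: because the forwarding path only copies bytes, the expensive hashing happens off the critical path, which is why offline analysis demands less of the appliance than true real-time inspection.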

With a detailed hash-list of known illegal images/text/movies, shouldn’t it be a relatively simple process to both limit much of the sharing of these images (when a match is detected, stop the flow of packets ‘tagged’ with that ‘fingerprint’) and notify authorities? Law enforcement could set up an automated system that issues demands to the ISP(s) in question, and then establish procedures to gain access to subscriber information in an effort to quickly find and question those suspected of peddling kiddie porn. This notion of mass surveillance for law enforcement purposes leads us to ask what we, as a society, want these devices used for, or what drivers should motivate the technology: do we want to limit these appliances to balancing network congestion/network load, or go further and try to identify ‘clearly’ criminal actions? I worry about the long-term effects of using DPI for automated detection of criminal behaviour, but my willingness to accept a bit more messiness in this world at the expense of increasingly efficient detection of deviance isn’t necessarily a commonly held position…

I ended the interview with Candace (at her request!) by leaving listeners with these questions: “What degree or level of surveillance do we, as a Canadian people, see as ‘good’ on ISP networks – what discrimination (in reference to packet discrimination) is permissible, and what is not? How do we actually go about developing a consensus on surveillance, and what processes should we engage in to codify said consensus?”

I actually don’t have responses to these queries. There are people who know both surveillance and discrimination literature far better than I likely ever will – my aim (at the moment) is just to puzzle through how this technology might intersect with privacy, surveillance, and discrimination literature, and gradually develop insights from which others can pursue far more nuanced, far more profound ethical thinking about DPI and similar network appliances.


  1. You knew I was going to comment right?

    I’m a little leery of this method. It’s very hard (if not impossible) to tag images this way: resizing an image would invalidate any hash generated as a marker.

    There would need to be a massive inventory of known child-porn images, with signatures generated for each. That signature database would have to be updated regularly and somehow distributed through secure (i.e. “tamper-proof”) channels. If pedophiles are smart, they will resize the images in their collection (or convert them from JPGs to GIFs, combine images into one, etc.; lots of free tools will do this kind of processing easily) – any activity that changes the structure of the file.

    So any use of hash signatures would rely on the criminals being stupid. Not an unrealistic assumption, but this system would be very easy to defeat.
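The commenter’s point is easy to demonstrate: cryptographic hashes are exact-match only, so altering even a single byte of a file produces an entirely different digest. The byte strings below are stand-ins, not real file contents:

```python
# Any transformation (resize, re-encode, format change) alters the bytes,
# and the stored signature no longer matches.
import hashlib

original = b"stand-in for the bytes of an image file"
modified = original + b"\x00"  # even a one-byte change is enough

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(modified).hexdigest()
assert h1 != h2  # the fingerprint in the signature database is now useless
```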

  2. Oh, I totally agree that what you’re pointing out would be an issue. I don’t think that you’d ever reach 100% enforcement, but do think that *some* images would be caught. I see the issue of signature updates (perhaps incorrectly) as a technical issue that could be overcome if there was dedicated technical competence with some funding behind it.

    While some pedophiles who were smart would likely just resize images, I don’t imagine that authorities would have an issue chasing after the low hanging fruit using a DPI-facilitated method, and more traditional approaches for more intelligent pedophiles.

  3. Agreed.

    For the sake of clarity: this process isn’t DPI, it’s content management. Since the entire packet stream has to be assembled into its final format, these systems wouldn’t be analyzing the packets; they would be reassembling the data portions of the packets (and discarding the rest) and then analyzing the data for particular matches.

    To understand the difference, this is the process spam filters or e-mail content management systems use to enforce compliance rules. It may be a subtle distinction to the layman, but it’s an important one from a technology standpoint. (Not to say that DPI appliances can’t be used in conjunction with content management – they certainly can – but they are distinct application types.)
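The distinction the commenter draws can be sketched as follows: the matching step runs on the reassembled data, not on individual packets. This assumes a simple sequence-numbered stream, and all names here are illustrative:

```python
# Content management: reassemble payloads into the final file, then match.
import hashlib

def reassemble(packets):
    """Order (sequence, payload) pairs, discard headers, join the data."""
    return b"".join(payload for _, payload in sorted(packets))

packets = [(2, b"world"), (1, b"hello ")]  # packets arrive out of order
data = reassemble(packets)
digest = hashlib.sha256(data).hexdigest()  # matched against files, not packets
```

Hashing any single packet’s payload here would never match a whole-file signature, which is exactly why this step is content management rather than per-packet DPI.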

  4. Fair enough – thanks for the appropriate phrasing and brief description!
