[I recently posted a version of this on another website, and thought that it might be useful to re-post here for readers. For a background on Deep Packet Inspection technologies, I’d refer you to this.]
There is a very real need for the various parties who advocate against Deep Packet Inspection (DPI) to work through what packet inspection appliances have done, historically, so that their arguments against DPI are as precise as possible. Packet inspection isn’t new, and it’s not likely to go away any time soon – perimeter defences for networks are essential for mitigating spam and viruses (and rely on Medium Packet Inspection).
I’m in no way an expert in the various discussions surrounding DPI (though I try to follow the network neutrality, privacy, and communications infrastructure debates), but I have put together a paper that attempts to clarify the lineage of DPI devices and (briefly) suggest that DPI can be understood as a surveillance tool that is different from prior packet inspection technologies. From a privacy perspective (which is where I sit in relation to the deployment of DPI), it’s important for privacy advocates to understand that approaching the issue from a principle-based approach is fraught with problems at legal, theoretical, and practical levels. The complexities of developing a principle-based approach are one of the reasons why many contemporary privacy scholars (myself included) have opted for a ‘problem-based’ approach to identifying privacy infringements. What, exactly, do most advocates mean when they say that their privacy is ‘violated’? I don’t think that a clear position emerges from the advocate literature (maybe it does, and I’m just not aware of it) – advocates appear to allude to a fundamental right to privacy, while pointing to specific instances as ‘violations’ of that right. The worry with principled approaches is that they struggle to fully capture what we mean when we say something is private, and equally struggle to capture contextualized social norms of privacy (e.g. Street View in the US versus Japan, bodily privacy in differing cultures, etc.).
DPI, as I read it, is problematic because it can potentially be used for widespread alteration of communications flows – and is currently being used for specific alterations. I’m not referring just to the throttling of P2P traffic, but also to the alteration of webpages (e.g. Rogers’ insertion of messages on webpages) and to the tracking of individual behaviours in order to inject particular, very relevant, ads. If we operate on the assumption that communicative privacy is required for a democracy and individual alike to thrive, then the capacity to (almost) invisibly manipulate communications in real time has a debilitating effect on generating authentic discourse. Privacy, in this sense, acts as an umbrella concept, one that is used to shelter other ‘core’ principles and values, such as autonomy, liberty, and freedom. Without the umbrella, these central values are at risk, threatening both the individual and society by compromising the digital communications networks we are so reliant on for discourse and deliberation.
DPI vendors are routinely involved in trying to sell their product – it’s what they do – but I think what is most telling isn’t what vendors say, but what the ISPs’ representatives say. When I talked to a Bell representative recently and asked whether it mattered to Bell that throttling BitTorrent might affect the dissemination of information, the rep’s response was “they chose that business model, and now they get to live with the consequences of choosing it” (paraphrased). Is the technology itself inherently ‘bad’? I’m not comfortable with that. Are particular uses of the technology ‘bad’? Undoubtedly.
The question becomes (as I read it): ‘how do we, as a society, mediate bad uses of technologies?’ Unfortunately, I haven’t figured out a real answer to that yet…