In the current CRTC hearings over Canadian ISPs’ use of Deep Packet Inspection (DPI) to manage bandwidth, I see two ‘win situations’ for the dominant carriers:

  1. They can continue to throttle ‘problem’ applications in the future;
  2. The CRTC decides to leave the wireless market alone right now.

I want to talk about the effects of throttling problem applications, and about why people talking about DPI should focus on the negative consequences of regulation (something that, admittedly, is often done already). In thinking about this, however, I first want to attend to the issue of censorship models, to render transparent the difficulties in relying on censorship-based arguments to oppose uses of DPI. Following this, I’ll consider some of the effects of regulating access to content through protocol throttling. The aim is to suggest that individuals and groups who oppose the throttling of particular application-protocols should focus on the effects of regulation, which is a more productive space of analysis and argumentation, rather than on DPI as an instrument of censorship.

Let’s first touch on the language of censorship itself. We typically understand this action in terms of a juridico-discursive model, or a model that relies on rules to permit or negate discourse. There are three common elements to this model-type:

  1. Affirming that such a thing is not permitted
  2. Preventing such a thing from being said
  3. Denying that such a thing exists

Many people writing about DPI (including myself) have often relied on points (1) and/or (2) in our worries that the technology could be used to absolutely prohibit a particular protocol-type, or that certain content will be prevented from moving across networks using fingerprint- and hash-based analyses of data transfers. While no ISP has stated that P2P doesn’t exist (the present traffic management hearings in Canada have seen ISPs focus on P2P as the reason for needing DPI appliances), ISPs such as Rogers have stated that none of their customers have complained about the throttling of P2P traffic.
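
To make that mechanism concrete, here is a minimal, hypothetical sketch of what fingerprint-based classification of traffic might look like. The signatures shown (the BitTorrent handshake prefix and the Gnutella connect banner) are well-known, publicly documented markers; the code illustrates the general technique only, and is not a description of any ISP’s or vendor’s actual DPI rules.

```python
import re

# Illustrative payload "fingerprints": a BitTorrent handshake starts with the
# byte 0x13 followed by the string "BitTorrent protocol"; Gnutella peers open
# with a plain-text "GNUTELLA CONNECT" banner. Both markers are public and
# well known; they are used here only to show the shape of signature matching.
SIGNATURES = {
    "bittorrent": re.compile(rb"^\x13BitTorrent protocol"),
    "gnutella": re.compile(rb"^GNUTELLA CONNECT"),
}

def classify(payload: bytes) -> str:
    """Return the protocol label whose signature matches the payload, else 'unknown'."""
    for protocol, pattern in SIGNATURES.items():
        if pattern.match(payload):
            return protocol
    return "unknown"

# Example: the opening bytes of a BitTorrent handshake are flagged as P2P,
# while an ordinary HTTP request is not.
print(classify(b"\x13BitTorrent protocol" + bytes(8)))    # -> bittorrent
print(classify(b"GET / HTTP/1.1\r\nHost: example.com"))   # -> unknown
```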

I think that many censorship discussions about DPI broadly correspond with the worries around ‘netscapes of power’. Winseck (2003) has suggested that such netscapes are intended to “buttress market power and to regulate behaviour through network architecture, the privatization of cyberlaw, surveillance, and the creation of walled gardens.” In a recent paper, “What’s Driving DPI?”, I suggest that we can understand contemporary netscapes through the lens of delayed access to content. Whereas walled gardens let network providers (e.g. AOL Online) perfectly capture subscriber information, and generally try to centralize network intelligence in the network rather than at its ends (Zittrain 2008), I would suggest that Deep Packet Inspection can be understood as a productive form of regulation that facilitates access to particular content without totally denying access to non-preferred content-types and repositories. Unlike the walled garden of AOL Online, Canadian customers can use application-types that their ISPs deem ‘problem applications’ (such as P2P), but using these application-types means that customers will suffer delays. As I will touch on shortly, even a few hundred milliseconds can have significant consequences for the use of particular content portals; while ISPs claim that a few more minutes to download content is practically nothing, I would suggest that contemporary research puts those claims in question.
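
To make the contrast with a walled garden concrete, here is a rough sketch of what ‘delay rather than deny’ regulation might look like in code; the protocol labels and delay values are invented for illustration and are not drawn from any ISP’s actual traffic-management policy. Every packet is still delivered, but the non-preferred protocol pays a latency penalty.

```python
import asyncio

# Hypothetical per-protocol policy: preferred traffic is forwarded at once,
# while a "problem" protocol is delayed rather than dropped. The labels and
# delay values below are invented for illustration only.
DELAY_SECONDS = {
    "http": 0.0,         # preferred: no added delay
    "bittorrent": 0.35,  # 'problem' protocol: delayed, but never blocked
}

async def forward(packet: bytes, protocol: str, send) -> None:
    """Deliver every packet, adding delay only to non-preferred protocols."""
    delay = DELAY_SECONDS.get(protocol, 0.0)
    if delay:
        await asyncio.sleep(delay)  # throttle by delaying, not by denying
    await send(packet)              # the content still arrives

async def demo() -> None:
    async def send(pkt: bytes) -> None:
        print(f"delivered {len(pkt)} bytes")
    await forward(b"GET / HTTP/1.1\r\n", "http", send)             # immediate
    await forward(b"\x13BitTorrent protocol", "bittorrent", send)  # ~350 ms later

asyncio.run(demo())
```

The point of the sketch is only that a classification step like the one above can feed a delay policy rather than a blocking rule; it is this distinction that the censorship framing tends to miss.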

Whereas in a netscape of power, such as Winseck’s, censorship revolves around the banning of, or prohibiting access to, content, perhaps a contemporary understanding of censorship could relate to the development of extensive knowledge concerning individuals and their habits, and a willingness to exploit their behaviours. Such censorship does not need to ban content outright; delaying it is sufficient to encourage consumers to use ‘preferred’ data protocols. Moreover, with an awareness of the discrete individual, not just what they do but why they do it, it is possible to preemptively target them with broadband packages using language that appeals to their ‘inner nature’. In effect, instead of talking about ‘censorship’, we can shift to a discussion of ‘consumer and citizen regulation’.

Google recently tested the effects of content delays; they found that when search results were delayed by 100 to 400 milliseconds, users became less and less likely to run searches using Google, even after the delays were removed. While the argument could be made that a Google-to-P2P analogy is an apples-to-oranges comparison, I would suggest that Google’s experiment betrays a central consumer truth: in a digital economy, where consumers are regularly taught that ‘faster is better’, the fastest content delivery system that is convenient is (or becomes) the preferred content delivery system. Where P2P access is delayed, and is generally a pain in the butt to get working, non-P2P file transfer systems will be preferred. Where ISPs are also content providers, this means that they can often deliver the same, or similar, content immediately to the individual. In the face of slow or immediate content, consumers will tend to prefer the immediate.

An advantage of thinking through the productive uses of DPI, and its effects in regulating content access, is that we can ask more interesting questions about the discourses and networks of power that are in operation around DPI than are afforded through discourses focused on censorship. There are already good people looking at this – Ralf Bendrath and Fenwick McKelvey have both been looking at DPI in this light – and I think that a regulatory framework offers useful ways of understanding the norming effects of DPI. Norming speaks to the mediation of consumers’ desires and aims by the networks of power penetrating them, whereas censorship tends to focus on preventing such networks from spawning, or (worse) obfuscates the actual existence of these networks where they already exist. Further, I think that regulation lets us ask questions like: “even in the face of CRTC regulation that stops the throttling of particular application-types, what are (or have been) the actual effects of regulating data protocols?”

This all having been written, if Google’s test does carry over to the realm of ISPs’ regulation of content delivery mechanisms, then it is possible that even a CRTC decision that prevents subsequent targeting of P2P may not matter: the ‘damage’ might already be done. The real risk, then, is that if ISPs are permitted to target ‘problem’ application types in the future, they can (effectively) encourage and discourage particular protocol uptake. This threatens to situate ISPs as regulators of emerging Internet protocols.

As regulators (and not censors), ISPs can easily navigate public criticism that relies on traditional understandings of ‘censorship’, because they can refer back to the three-point model written above and say ‘our actions don’t fit that model’. ISPs as regulators are, however, vulnerable to critiques of the very modes and effects of their regulations – they are vulnerable to explicit analyses of the effects of their norming actions (e.g. throttling P2P while making Direct TV available to the same consumer). Only by engaging with, and exploiting the internal logics of, ISPs-as-Content-Providers’ discourses are these discourses likely to be disrupted and shifted towards more appealing regulatory discussions and frameworks. Note that these are shifts, however, as opposed to stopping the discourse entirely. I have serious doubts that such a cessation of discourse on DPI can ever actually take place, though who or what orients future power formations remains an open (and thus actionable) question. Ultimately, as I’m thinking about things right now, it seems that focusing on regulatory discourse is far more promising than almost exclusively attending to a discourse of censorship, given censorship’s vulnerabilities to ISP-generated critique and rebuttal.