Choosing Winners with Deep Packet Inspection

I see a lot of the network neutrality discussion as one surrounding the conditions under which applications can, and cannot, be prevented from running. On one hand there are advocates who maintain that telecommunications providers – ISPs such as Bell, Comcast, and Virgin – shouldn’t be responsible for ‘picking winners and losers’ on the basis that consumers should make these choices. On the other hand, advocates for managed (read: functioning) networks insist that network operators have a duty and responsibility to fairly provision their networks in a way that doesn’t see one small group negatively impact the experiences of the larger consumer population. Deep Packet Inspection (DPI) has become a hot-button technology in light of the neutrality debates, given its potential to let ISPs determine what applications function ‘properly’ and which see their data rates delayed for purposes of network management. What is often missing in the network neutrality discussions is a comparison between the uses of DPI across jurisdictions and how these uses might impact ISPs’ abilities to prioritize or deprioritize particular forms of data traffic.

As part of an early bit of thinking on this, I want to direct our attention to Canada, the United States, and the United Kingdom to start framing how these jurisdictions are approaching the use of DPI. In the process, I will make the claim that Canada’s recent CRTC ruling on the use of the technology appears increasingly progressive in light of recent decisions in the US and the likelihood of the UK’s Digital Economy Bill (DEB) becoming law. Up front I should note that while I think Canada can be read as ‘progressive’ on the network neutrality front, this shouldn’t suggest that either the CRTC or Parliament has done enough: further clarity into the practices of ISPs, additional insight into the technologies they use, and an ongoing discussion of traffic management systems are needed in Canada. Canadian communications increasingly pass through IP networks, and as a result our communications infrastructure should be seen as being as important as defence, education, and health care, each of which is tied to its own critical infrastructure but connected to the others and enabled through digital communications systems. Digital infrastructures draw together the fibres connecting the Canadian people, Canadian business, and Canadian security, and we need to elevate discussions about this infrastructure to make it a prominent part of the national agenda.

The Canadian Situation

In Canada, a substantial amount of attention has been directed towards the use of DPI equipment since 2007, when the Canadian Association of Internet Providers (CAIP) filed a complaint about Bell’s use of the technology to affect how the data traffic of CAIP members’ customers was transmitted through Bell’s infrastructure. The result of Bell v. CAIP, and the 2008/9 CRTC investigation into how DPI is used by ISPs more widely, was positive in some lights and devastating in others. Positively, out of the 2008/9 investigation the CRTC asserted that:

  • the blocking of content is prohibited unless approved by the CRTC;
  • when a traffic management practice causes noticeable degradation of time-sensitive services, it amounts to controlling the content or influencing its meaning; as such, any actions that create such degradation require approval by the CRTC;
  • the CRTC affirmed that it works in a complementary fashion with the Privacy Commissioner of Canada and that telecommunications providers are held to a higher standard than that contained in PIPEDA alone. Critically, not only are primary ISPs prohibited from using data gathered from traffic management for anything other than management actions, but “the Commission directs all primary ISPs, as a condition of providing wholesale services to secondary ISPs, to include, in their service contracts or other arrangements with secondary ISPs, the requirement that the latter not use for other purposes personal information collected for the purposes of traffic management and not disclose such information.”
  • economic measures are preferred to technical traffic management processes;
  • the delaying of non-time sensitive services (e.g. email, peer-to-peer, FTP, etc) does not require CRTC approval.

Of course, this didn’t forbid ISPs from using DPI – an outright ban was never likely – but it did put strong conditions on what was and was not permissible use of DPI. Notably, delaying non-time-sensitive services still requires no CRTC approval, and wholesale ISPs both remain affected by DPI and can expect a mere 30 to 60 days’ notice before primary ISPs make material changes that would affect them. The privacy element of the ruling was reinforced in the Privacy Commissioner of Canada’s finding on deep packet inspection, which required Bell to note on its website that personal information (i.e. subscriber ID and IP address) was briefly collected (and then quickly discarded) in the ongoing use of DPI. Out of the CRTC and OPC’s decisions, we can comfortably say that Canada has a strong set of regulatory bones when it comes to DPI and network neutrality; what’s left is fleshing out those bones, which will hopefully happen over the coming months and years.

The United States of Inspection

As we turn our gaze south of the 49th parallel, we see that DPI has been used in various ‘exploratory’ ways. Arguably, it was the use of DPI for behavioural advertising – by the company NebuAd and various ISP partners – and the incredibly disruptive treatment of peer-to-peer filesharing applications that brought the technology into the media more widely. In the former case, NebuAd partnered with ISPs such as Charter to insert a DPI device in the ISPs’ network environments. Once in the network, it was possible for NebuAd to modify data transfers by creating a new packet and forging the IP address and port information to make the packet appear to come from the original source. Thus, if you were being served a packet from Google or Yahoo!, it would still appear to your computer as though it had been delivered from a Google or Yahoo! server. The NebuAd system used TCP’s sequence (SEQ) and acknowledgement (ACK) numbers to ensure that users’ computers would not reject the forged packet.

Contained within this new packet was a bit of javascript that directed users’ computers to collect a cookie from a NebuAd server; this cookie was then used to track users and to subsequently serve ads that were relevant to the user based on their browsing history. Attempts to delete the cookie led to it simply being re-delivered the next time a user browsed somewhere on the ‘net. This behaviour led researcher Robert Topolski to assert that “NebuAd’s code injected into another’s page source is a cross-site exploit (XSS) and the subsequent behavior of loading cookies it normally would not load is a browser hijack. NebuAd accomplishes its XSS by using what is effectively a classic man-in-the-middle attack.”
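To make the mechanism concrete, here is a minimal sketch of the forgery technique described above. All field and function names are my own invention for illustration, and the segment is modelled as a plain data structure rather than real network traffic; the point is simply that by spoofing the server’s address and continuing its sequence numbering, an in-network device can craft data that a client’s TCP stack will accept as genuine:

```python
# Illustrative sketch (hypothetical names, no real sockets) of how a DPI box
# can forge a packet that a client's TCP stack accepts as server data.
from dataclasses import dataclass

@dataclass
class TcpSegment:
    src_ip: str      # claimed source address
    src_port: int
    dst_ip: str
    dst_port: int
    seq: int         # sequence number of the first payload byte
    ack: int         # next byte expected from the peer
    payload: bytes

def forge_injected_segment(server_to_client: TcpSegment, injected: bytes) -> TcpSegment:
    """Piggyback on an observed server response.

    Copying the server's IP/port and continuing its sequence numbering puts
    the forged segment inside the client's TCP receive window, so the
    client's stack treats it as legitimate in-order server data.
    """
    return TcpSegment(
        src_ip=server_to_client.src_ip,    # spoof the server's address
        src_port=server_to_client.src_port,
        dst_ip=server_to_client.dst_ip,
        dst_port=server_to_client.dst_port,
        # next in-order byte after the observed payload:
        seq=server_to_client.seq + len(server_to_client.payload),
        ack=server_to_client.ack,          # unchanged: nothing new from the client
        payload=injected,                  # e.g. a script tag fetching a tracking cookie
    )

# A response observed in transit from a web server...
observed = TcpSegment("93.184.216.34", 80, "10.0.0.5", 51000,
                      seq=1000, ack=2000, payload=b"<html>...</html>")
forged = forge_injected_segment(
    observed, b"<script src='//tracker.example/t.js'></script>")
assert forged.src_ip == "93.184.216.34"            # appears to come from the server
assert forged.seq == 1000 + len(observed.payload)  # lands in-window at the client
```

This is also why the technique is fairly described as a man-in-the-middle attack: nothing in the client’s TCP state machine distinguishes the forged segment from the real server’s next transmission.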

In light of the damning evidence that ‘consent’ was never genuinely achieved (in at least one ISP’s case, the only notice was a change to an already massive privacy policy to “inform” customers of the new behaviour), NebuAd was very publicly disciplined in front of the House Telecommunications Subcommittee. Congressman Markey asserted that “Simply providing a method for users to opt-out of the program is not the same as asking users to affirmatively agree to participate in the program.” While NebuAd has lost its CEO, is now subject to a class action lawsuit, and is itself dead in the water (though has arguably been reincarnated in the UK as Insight Ready), no legislation has been passed to address behavioural advertising using DPI. A Senate Commerce Committee session in September 2008 led to three of the US’s largest ISPs – AT&T, Verizon, and Time Warner – committing to an “affirmative consent” model for behavioural advertising should the ISPs ever adopt such an advertising system, but no Senate action even attempted to legislate a consent-based model. The Federal Trade Commission (FTC) went ‘so far’ as to advocate voluntary self-regulation of the industry, encompassing the following principles:

  1. Transparency and customer control, which maintains that on every website where data is collected for behavioural advertising, customers are informed of this in concise and clear language and given the option of choosing whether their information will be collected for these purposes.
  2. Reasonable security, and limited retention, of consumer data. In essence, this requires companies to secure data in a manner consistent with FTC data security enforcement and only retain data as long as required for legitimate business purposes.
  3. Affirmative express consent for material change to existing privacy promises. Critical is that this principle is meant to apply even when the material change is a result of a corporate merger when such a merger modifies the ways in which companies collect, use, and share information.
  4. Affirmative express consent to (or prohibition against) using sensitive data for behavioural advertising. This principle does not actually identify what constitutes sensitive information; the FTC sought input into what classes of information should be considered sensitive and whether the collection of such information should be prohibited by regulation instead of by customer choice.

In the case of using DPI for network management purposes, Comcast was found to be using TCP RST packets to intentionally disrupt peer-to-peer filesharing programs that accounted for substantial amounts of data traffic on its networks. The stated issue with the programs was that they generated high levels of congestion; in effect, this meant that a large number of customers’ packets were regularly being dropped as Comcast routers struggled to keep pace with the high levels of peer-to-peer traffic. While at one point the company maintained that it only used RST packets during periods of high congestion, it ultimately admitted that its RST-based system was triggered regardless of overall network congestion and at all times of the day.
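A hedged sketch of the RST technique may help here. The names and flow representation below are invented for illustration; the essential points are that a TCP stack honours a reset only if its sequence number is plausible for the connection (so the DPI device must track per-flow state), and that, as Comcast admitted, the trigger was the protocol classification alone rather than actual congestion:

```python
# Illustrative sketch (invented names, plain dicts rather than real packets)
# of forged TCP RST injection against a tracked flow.

def forge_rst(flow):
    """Build a reset the local endpoint will accept, spoofed so that it
    appears to come from the remote peer of the connection."""
    return {
        "src": flow["remote"],        # pretend the remote peer sent it
        "dst": flow["local"],
        "flags": "RST",
        "seq": flow["expected_seq"],  # must match what the receiver expects,
                                      # or the stack silently ignores the reset
    }

def manage_flow(flow, looks_like_p2p, congested):
    # Comcast's *stated* policy was congestion-triggered resets; its
    # *admitted* practice keyed on the protocol alone, regardless of load —
    # which is what this branch reproduces.
    if looks_like_p2p:
        return forge_rst(flow)
    return None

flow = {"local": ("10.0.0.5", 6881), "remote": ("198.51.100.7", 50322),
        "expected_seq": 420000}
rst = manage_flow(flow, looks_like_p2p=True, congested=False)
assert rst is not None and rst["flags"] == "RST"  # reset issued with no congestion
```

Because each endpoint receives a reset apparently sent by the other, both tear the connection down and neither can tell that a third party intervened.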

As a result of Comcast’s use of DPI to target particular applications and application-types, the FCC issued an order requiring the ISP to stop this mode of network management. It did so under its ancillary authority – authority implicitly derived from past judicial rulings, policy statements, congressional mandates, and the Telecommunications Act. Specifically, the FCC required Comcast to:

  1. Reveal the “precise contours” of its network management practices, including the types of equipment used, when they came into use, how they were configured, and where they have been deployed.
  2. Come up with a compliance plan complete with benchmarks that explains how Comcast will move “from discriminatory to nondiscriminatory network management practices by the end of the year.”
  3. Publicly disclose the details of its new practices, “including the thresholds that will trigger any limits on customers’ access to bandwidth.”

The FCC decision was met with two responses from Comcast. First, the company adopted a protocol-agnostic solution to dealing with high-bandwidth usage. This saw it move from deep packet inspection – which examines the payload of data packets – to shallow packet inspection, which is (relatively) limited to examining header information. Under the revised approach, consumers who are shown to be engaged in high-bandwidth activities for 15 minutes or longer have their packets reclassified from the default “priority best effort” down to “best effort”.
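The protocol-agnostic logic can be sketched in a few lines. The rate threshold below is invented (Comcast’s actual thresholds varied), but the shape of the scheme is as described: per-subscriber byte counting with no payload inspection, demotion after 15 minutes of sustained heavy use, and restoration once usage falls back:

```python
# Minimal sketch (threshold values hypothetical) of protocol-agnostic
# congestion management: no payload inspection, only per-subscriber rates.

SUSTAINED_SECONDS = 15 * 60   # 15 minutes of heavy use triggers demotion
HIGH_RATE_BPS = 8_000_000     # invented "high bandwidth" threshold

class Subscriber:
    def __init__(self):
        self.cls = "priority best effort"  # default traffic class
        self.high_since = None             # when sustained heavy use began

    def observe(self, rate_bps, now):
        """Update the subscriber's class from a measured sending rate."""
        if rate_bps >= HIGH_RATE_BPS:
            if self.high_since is None:
                self.high_since = now      # heavy period starts
            elif now - self.high_since >= SUSTAINED_SECONDS:
                self.cls = "best effort"   # demoted after sustained heavy use
        else:
            self.high_since = None         # usage dropped: restore default
            self.cls = "priority best effort"

sub = Subscriber()
sub.observe(10_000_000, now=0)
sub.observe(10_000_000, now=16 * 60)       # 16 minutes of sustained heavy use
assert sub.cls == "best effort"
```

The notable design point is that the scheme never asks *what* the traffic is, only *how much* of it there is, which is precisely what removes the application-discrimination objection raised against the RST approach.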

Second, the company took the FCC to court, arguing that the FCC had exceeded its authority in determining how the corporation could manage its networks. The courts recently returned a decision in Comcast’s favor: the FCC’s order that Comcast stop issuing RST packets using DPI equipment has been invalidated on the basis that the FCC exceeded its authority. This sends a message that American telecommunications carriers can use equipment, as they perceive needed, to manage their networks, and such usage includes mobilizing DPI to invade and disrupt customers’ packet streams. It remains to be seen how this will affect the differentiation between the facilities-based VoIP services that Comcast provides and the (apparent) degradation of non-facilities-based VoIP services (e.g. Skype) when network congestion occurs: does the FCC have the right to require equal treatment of these types of service? This will be an interesting matter to see unfold in light of the Court’s decision today. It will similarly be interesting to see if, after the decision, ISPs actually use RST packets to disrupt particular traffic flows or instead avoid this approach given the negative press the technique attracted.

So, where does this all leave the US in comparison to Canada? It means that only non-regulatory processes limit the use of behavioural advertising – and, as demonstrated by Chris Soghoian’s work, such self-regulation is practically non-regulation in the advertising business – and that the traffic management questions linger in the air. ISPs in the US managed to get a bit more freedom from the FCC with the decision favouring Comcast, and the FTC has been unwilling to strongly regulate ISPs’ uses of DPI. Thus, the American reality stands in stark contrast to Canada’s: Canadians have a skeleton of regulated guidelines that ISPs are required to adhere to, whereas the US remains a relatively unregulated market for DPI.

A Note on the UK

I’m not a European telecommunications scholar, and so don’t want to make broad statements about Europe, but I am slightly more aware of the UK situation. As such, I’ll limit my discussion of Europe to the UK.

Behavioural advertising and content management are key issues facing the UK citizenry. In the former’s case, Phorm has been the UK’s NebuAd, partnering with various prominent ISPs to provide an advertising service. The company’s use of DPI was perhaps even more egregious than NebuAd’s, insofar as Phorm’s system involves a series of 307 redirects that result in a cookie being placed on a customer’s computer for tracking and advertising purposes (a great technical analysis of Phorm’s system has been performed by Richard Clayton). Most significantly, Phorm forges the cookie so that it appears to come from the originating website rather than the Phorm system; under this system you receive a cookie that appears to legitimately come from cnn.com when browsing to that website, even though it comes from Phorm. Activists came together and have continuously put pressure on Phorm – often arguing that its actions are in violation of the Regulation of Investigatory Powers Act – and the EU Commission is presently bearing down on the UK for its failure to address the privacy-related concerns accompanying this instantiation of behavioural advertising. (Perhaps in response, we now see Phorm scurrying to Brazil – will Brazilian activists take a stand against the company as UK citizens have?)
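For readers who want the shape of the redirect chain without wading into Clayton’s full analysis, here is a loose walk-through. The domain names, cookie values, and step structure are hypothetical simplifications of the documented behaviour; the key move is the last step, where the tracking cookie is scoped to the domain the user actually requested:

```python
# A simplified, hypothetical walk-through of a Phorm-style interception:
# 307 redirects bounce the browser through a profiling domain, and the final
# cookie is forged to appear to belong to the site the user asked for.

def phorm_style_intercept(requested_host):
    steps = []
    # 1. Browser asks for the real site; the in-network box answers instead.
    steps.append(("GET", requested_host, None))
    # 2. 307 redirect to the profiler, which reads/sets its own tracking ID.
    steps.append(("307", "profiler.example", "Set-Cookie: uid=abc123"))
    # 3. 307 redirect back toward the original site, via the box once more...
    steps.append(("307", requested_host, None))
    # 4. ...which sets the tracking cookie scoped to the *requested* domain,
    #    so to the browser it looks like the site itself set it.
    steps.append(("Set-Cookie", requested_host,
                  "uid=abc123; Domain=" + requested_host))
    return steps

chain = phorm_style_intercept("cnn.com")
assert chain[-1][1] == "cnn.com"                 # cookie attributed to cnn.com
assert any(step[0] == "307" for step in chain)   # redirect-driven, not inline
```

A 307 is used rather than a 301/302 because it instructs the browser to repeat the request to the new location unchanged, which keeps the interception invisible to the user while the profiler is consulted.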

The content management issue has come up most recently in the form of the Digital Economy Bill (DEB), where there is a real possibility that ISPs will be required to act as a third party in disputes between rights holders and those accused of infringing on holders’ copyrights. ISPs would be required to work against their own customers, insofar as repeat copyright infringers would be subject to some form of traffic throttling. Whether this involves the use of deep packet inspection or other technological measures isn’t entirely clear from the bill, but many ISPs in the UK are vehemently opposed to redeveloping their network architecture to shield the copyright industries’ business models. The DEB, as presently written, makes it unclear what ISPs will actually be required to do, but rights holders seem to favor the inspection or analysis of data traffic, an approach to managing data that might lead to content discrimination or an extension of the already functioning discrimination against particular applications and application-types. From my own research perspective, it will be interesting to see if there is an expansion of the uses of CView on the Virgin network should the DEB be realized as law, and whether the EU would step in if enforcing the DEB results in egregious violations of privacy.

It must be recognized that UK ISPs, like their Canadian counterparts, are actively engaged in throttling particular applications and application-types during peak usage times to mitigate network congestion. Orange UK, as an example, throttles what it terms ‘dirty’ protocols – those which “consume as much of the available bandwidth that is available” – with the rejoinder that if the ISP gives such protocols and their associated applications an inch they will try to “take a mile”. Orange, of course, is not exceptional: both BT and Virgin also have traffic management policies, as do most other UK ISPs that I’ve studied.

So, where does this leave the UK in contrast to the Canadian regulatory position on deep packet inspection, and content management more broadly? It remains questionable whether the EU will permit behavioural advertising based around DPI equipment, but the UK government itself has not come out against the technology in any meaningful way. In Canada, any use of DPI for behavioural advertising runs up against both the CRTC and the OPC. UK ISPs are permitted to use traffic management systems, many of which, I suspect (though haven’t done the research yet to demonstrate), are similar to those in North America. Regardless, UK ISPs, like their Canadian counterparts, are involved in choosing the winning applications and application-types for content delivery, though they may soon face having to filter particular content as well. Canadian ISPs have repeatedly stated that they have no desire to filter content for technical, privacy, and business reasons, and it doesn’t look like a Canadian equivalent of the DEB is coming down the pipeline for a while. Unlike efforts such as Comcast’s, where traffic management is protocol-agnostic, we see some Canadian and UK ISPs targeting particular methods of content delivery as clean or ‘dirty.’ Ultimately, Canadian and UK ISPs are similar in their respective approaches to traffic management but differ in respect to both behavioural advertising and (possibly) content filtering.

Network Neutrality in a Western Context?

We began with a note on network neutrality, and it seems appropriate to close on one as well. I firmly believe that network operators need to be able to manage their networks in a manner that is transparent to the public, effective, and efficient. This may indeed require the implementation and use of technologies such as deep packet inspection. I would hasten to note that not all DPI appliances are created equal: some excel at analyzing content by extracting and matching content signatures, others are predominantly marketed and used as security appliances, and yet others are aimed at subscriber billing. Instead of resisting DPI as a broad technology, we need to focus on opposing some applications of the technology while praising others. While I’m not making an argument that DPI is a ‘neutral’ technology – it’s a surveillance technology with elements of control embedded into it – I do want to suggest that not all surveillance, not all applications of control, are inherently bad. Parents carry baby monitors with them – monitors that have surveillance as a key value embedded into their design – and this is a benign, if not positive, application of surveillance. We need to be mindful of, and on the watch for, damaging surveillance and hindering acts of control while recognizing that some surveillance, some control, is good and required for a functioning contemporary Internet.

In light of my willingness to accept the value of DPI in network environments, I see staunch opposition to ‘network intelligence’ as considerably off the mark. Networks are intelligent, and there is nothing wrong with intelligence so long as it is used in a manner that is clearly beneficial for customers, with legislation and regulations precluding the application of network intelligence for negative purposes. Gaining customer acceptance may require transparency on the part of vendors and ISPs alike: vendors to explain what their technology does and to offer ‘virtual tests’ of the technology, as is done with some consumer routers, and ISPs to explain why and how they have deployed the equipment. As awareness of how the network is intelligent spreads, it becomes possible to engage in a more substantive discussion about the nature of contemporary networks and the challenges perceived by customers, civil advocates, and network operators. As it stands, we are regularly subjected to near-dogmatic language from either camp – ‘network neutrality is a meaningless slogan’ vs. ‘smart networks are the death of the Internet’ – that is arguably misleading and polemical.

Given the different regulatory environments, we cannot expect supporters of network neutrality to adopt similar language in their advocacy – RIPA is clearly something that doesn’t enter North American debates, whereas different wiretapping laws and consumer protections are drawn on in American cases, and Canada regularly sees the language of privacy and consumer rights presented by its advocates – but this shouldn’t prevent researchers and other interested parties from identifying common advocacy principles to see what does and does not work. Further, any such comparative project ought to try and identify differences that arise when there is greater transparency (either required by regulation or performed on a voluntary basis) surrounding the development, deployment, and usage of the technologies. This would (and, in Canada, did) enable advocates to more clearly articulate their messages while also alleviating some of the concerns that emerge when our communications systems are mediated by an unknown technical power, in unknown manners, for less than clear corporate means.

Citizens of Canada, the US, and UK need to understand how their communications are regulated and have a clear and valued voice in shaping the structure of their communications systems; citizens along with government and business, as opposed to business and deep packet inspection alone, must be responsible for choosing the ‘winning’ applications that facilitate digital communications across the Internet.