Technology is neither good nor bad. It’s also not neutral. Network neutrality, a political rallying cry meant to mobilize free-speech, free-culture, and innovation advocates, was reportedly betrayed by Google following the release of a Verizon-Google policy document on network management/neutrality. What the document reveals is that the two corporations, facing a (seemingly) impotent FCC, have gotten the ball rolling by suggesting a set of policies that the FCC could use in developing a network neutrality framework. Unfortunately, there has been little even-handed analysis of this document from network neutrality advocates; instead we have witnessed vitriol and over-the-top rhetoric. This is disappointing. While sensational headlines attract readers, they do little to actually inform the public about network neutrality in a detailed, granular, reasonable fashion. Verizon-Google have provided advocates with an opportunity to pointedly articulate their views while the public is watching, and this is not an opportunity that should be squandered on bitter and unproductive criticism.
I intend this to be the first of a few posts on network neutrality. In this post, I work exclusively through the principles suggested by Verizon-Google. In this first, preliminary analysis I will draw on existing American regulatory language and on lessons that might be drawn from the Canadian experience with network management. My overall sense of the document published by Verizon-Google is that, in many ways, it’s quite conservative insofar as it adheres to dominant North American regulatory approaches. My key suggestion is that instead of rejecting the proposed principles in their entirety we should carefully consider each in turn. During my examination, I hope to identify which principles, or elements of them, could be usefully taken up into a government-backed regulatory framework that recognizes the technical, social, and economic potential of America’s broadband networks.
I’m in the middle of a large project (for one person), and as part of it I wanted to host some CRTC documents on the project’s web server to link into. You see, if you’ve ever been involved in one of the CRTC’s public notices you’ll know that there is a literal deluge of documents, many of which are zipped together. For the purposes of disseminating documents over email this works well – it puts all of the documents from, say, Bell into a single zipped file – but it makes for a user-unfriendly linking structure: expecting casual readers to download and dig through zip archives is unreasonable. Given that part of this project is facilitating ease of access to resources, it’s important that users can link to the documents themselves, not to zip archives.
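Bulk-extracting the archives so that each document gets its own stable URL is easy to script. Here is a minimal sketch of the approach; the directory names are hypothetical and would need to match the web server’s document root:

```python
import zipfile
from pathlib import Path


def extract_archives(archive_dir: Path, public_dir: Path) -> list[Path]:
    """Extract every zip in archive_dir into its own folder under public_dir.

    Returns the extracted file paths, each of which can then be linked
    to directly (e.g. /crtc-docs/bell/submission.pdf) instead of
    pointing readers at the zip archive itself.
    """
    extracted = []
    for archive in sorted(archive_dir.glob("*.zip")):
        # One folder per archive, e.g. public/crtc-docs/bell/
        target = public_dir / archive.stem
        target.mkdir(parents=True, exist_ok=True)
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(target)
            extracted += [target / name for name in zf.namelist()]
    return extracted
```

Naming each output folder after the archive (Bell’s filings under `bell/`, and so on) preserves the per-company grouping the zips provided while making every individual document addressable.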
While I pay attention to copyright developments in Canada and abroad, and have strong stances on how academics and the Canadian government should licence their publications, I’m not a lawyer. I do, however, know that government documents in Canada are governed by Crown copyright – unlike in the US, the Canadian government maintains copyright over its publications – and thus I wanted to check with the CRTC whether there were any problems with hosting documents from their site, including those presumably under Crown copyright, such as the CRTC’s decisions.
The CRTC is listening to oral presentations concerning Canadian ISPs’ use of Deep Packet Inspection (DPI) appliances to throttle Canadians’ Internet traffic. Rather than discuss these presentations at any length, I thought that I’d step back a bit and try to outline some of the attention that DPI has received over the past few years. This should give people who are newly interested in the technology an appreciation for why DPI has become the focus of so much attention, and provide paths to learn about the politics of DPI. This post is meant to be a fast overview, and only attends to the North American situation, given that it’s what I’m most familiar with.
Massive surveillance of digital networks took off as an issue in 2005, when the New York Times published its first article on the NSA’s warrantless wiretapping operations. Concern about such surveillance had brewed for years, but (in my eyes) really exploded as the public started to learn about the capacities of DPI technologies as potential tools for mass surveillance.
DPI began garnering headlines in a major way in 2007, largely as a result of Nate Anderson’s piece, “Deep packet inspection meets ‘Net neutrality, CALEA.” Anderson’s article is typically recognized as the popular news article that put DPI on the scene, and the American public’s interest in this technology was reinforced by Comcast’s use of TCP RST packets, made possible using Sandvine equipment. These packets (which appear to have been first discussed in 1981) were used by Comcast to convince P2P clients that the other client(s) in a P2P session didn’t want to communicate with the Comcast subscriber’s P2P application, which led to the termination of the data transmission. Things continued to heat up in the US as the behavioural advertising company NebuAd began partnering with ISPs to deliver targeted ads to ISPs’ customers using DPI equipment. The Free Press hired Robert Topolski to perform a technical analysis of what NebuAd was doing, and he found that NebuAd was (in effect) performing a man-in-the-middle attack to alter packets as they coursed through ISP network hubs. This report, prepared for Congressional hearings into the surveillance of Americans’ data transfers, was key to driving American ISPs away from NebuAd in the face of political and customer revolt over targeted advertising practices. NebuAd has since shut its doors. In the US there is now talk of shifting towards agnostic throttling, rather than throttling that targets particular applications: discrimination would be applied equally, instead of homing in on specific groups.
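For readers unfamiliar with the mechanism: a TCP segment carries a flags byte, and setting the RST (reset) bit tells the receiving network stack to tear the connection down immediately, no questions asked. A minimal sketch of checking that bit in a raw TCP header (the sample values in the usage below are fabricated for illustration):

```python
# RFC 793 (1981) defines the TCP flags byte: CWR ECE URG ACK PSH RST SYN FIN.
TCP_FLAG_RST = 0x04


def is_reset(tcp_header: bytes) -> bool:
    """Return True if the RST (reset) flag is set in a raw TCP header."""
    # Byte 13 of the 20-byte TCP header holds the flags.
    return bool(tcp_header[13] & TCP_FLAG_RST)
```

A forged segment with RST (and usually ACK) set is enough to make either endpoint abandon the connection, which is why injected resets were so effective at cutting off long-lived P2P transfers.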
In Canada, there haven’t been (many) accusations of ISPs using DPI for advertising purposes, but throttling has been at the center of our discussions of how Canadian ISPs use DPI to delay P2P applications’ data transfers.
I’ve just posted a document that draws together the CRTC’s February 4, 11, and 12 filings for PN 2008-19. The document ties ISPs to categories of anonymized data for easy reference, and is also meant to contextualize each data set by reproducing the questions that led ISPs to develop these data sets in the first place.
Items of note:
- Responses to question 1 (a) show that, save for a single ISP, ISPs’ annual percentage growth of total traffic volume has decreased. ISPs required to anonymously submit data: Barrett, Bell Canada et al., Cogeco, MTS Allstream, QMI (Videotron), Rogers, Sasktel, Shaw, Telus.
- Responses to question 1 (b) show that the percentage of HTTP/Streaming traffic has increased, two companies report that the percentage of P2P traffic has increased and two report it has decreased slightly, UDP traffic has increased slightly, and the “Other” category now accounts for a smaller percentage of total traffic than in the first months measured. ISPs required to anonymously submit data: Barrett, Bell Canada et al. (for Bell Wireline), Bragg, Rogers, and Shaw.
- Responses to 2 (a) reveal the annual percentage growth of monthly average usage per end-user. We find that growth is occurring on company networks, and that this growth has been uneven (e.g. Company A experienced 16% growth one year, 47% the next, and 13% in the final year). This suggests, to me, that developing an accurate forecast of expected bandwidth growth would be challenging. Without knowing which companies are associated with each data set, it is difficult for analysts to determine whether network management technologies might be responsible for the changes in growth rates. ISPs required to anonymously submit data: Barrett, Bell Canada et al. (for Bell Wireline), Cogeco, MTS Allstream, QMI (Videotron), Rogers, and Telus.
- Responses to 2 (b) discuss the percentage growth for ISPs’ top 5% and 10% of users. Data for the top 5% shows that two companies experienced negative growth in 2007-2008, one saw only 2% growth, and the last 25% growth. Data for the top 10% shows that two companies experienced negative growth in 2007-2008, one 1% growth, and the last 25% growth. ISPs required to anonymously submit data: Bell Canada et al. (for Bell Wireline), Cogeco, MTS Allstream, QMI (Videotron), Rogers, and Telus.
- Responses to 2 (c) identify how much of the total traffic the top 5% and 10% of users account for. The top 5% account for 37%-56% of total traffic; the top 10% account for 52%-74%. These are fairly damning numbers, given that they clearly demonstrate that a massive proportion of network traffic is generated by a relatively small minority of users. ISPs required to anonymously submit data: Barrett, Bell Canada et al. (for Bell Wireline), Bragg, Cogeco, MTS Allstream, Primus, QMI (Videotron), Rogers, Shaw, and Telus.
- Responses to 2 (d) break down the application usage numbers for the top 5% and 10% of ISPs’ users. For the top 5% of users, HTTP/Streaming has remained relatively constant, P2P use decreased for only one company, UDP traffic is up, and “Other” traffic has decreased for two of three companies. For the top 10% of users, HTTP/Streaming traffic makes up a higher percentage of total traffic, in all but one case P2P traffic represents a larger percentage of total traffic, UDP is up, and “Other” is down for two of three companies. ISPs required to anonymously submit data: Bell Canada et al. (for Bell Wireline), Bragg, and Shaw.
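To illustrate why the uneven growth reported in 2 (a) makes forecasting hard: compounding Company A’s three reported rates yields an average annual rate that none of the individual years resembles. A quick sketch, using the figures quoted above:

```python
# Company A's reported annual growth in monthly average usage per end-user.
annual_growth = [0.16, 0.47, 0.13]

# Compound the three years, then take the geometric mean to get the
# compound annual growth rate (CAGR).
total = 1.0
for g in annual_growth:
    total *= 1 + g
cagr = total ** (1 / len(annual_growth)) - 1

print(f"Compound annual growth rate: {cagr:.1%}")  # roughly 24% per year
```

A forecaster working from the roughly 24% average would have badly overestimated the 13% and 16% years and badly underestimated the 47% year, which is part of why ISPs’ capacity projections in this proceeding deserve scrutiny.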