Deep packet inspection (DPI) is a form of network surveillance and control that will remain in Canadian networks for the foreseeable future. It operates by examining data packets, determining their likely application-of-origin, and then delaying, prioritizing, or otherwise mediating the content and delivery of those packets. Ostensibly, ISPs have inserted it into their network architectures to manage congestion, mitigate unprofitable capital investment, and enhance billing regimes. These same companies routinely run tests of their DPI systems to refine the algorithmic identification and mediation of data packets. Such tests evaluate algorithmic enhancements to system productivity and efficiency at a micro level before new policies are rolled out to the entire network.
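
To make the classify-then-mediate logic concrete, here is a minimal sketch of how such a device conceptually handles a packet. The payload signatures, application labels, and policy actions below are simplified assumptions for illustration only; they do not reflect Rogers' (or any ISP's) actual configuration, which relies on far more sophisticated and proprietary heuristics.

```python
# Toy illustration of DPI-style classification and mediation.
# Signatures, application names, and policies are hypothetical.

POLICIES = {
    "bittorrent": "throttle",   # shape peer-to-peer traffic downward
    "voip": "prioritize",       # favour latency-sensitive traffic
    "unknown": "default",       # pass everything else through unchanged
}

# Crude payload signatures standing in for an appliance's pattern library.
SIGNATURES = {
    b"\x13BitTorrent protocol": "bittorrent",  # BitTorrent handshake prefix
    b"SIP/2.0": "voip",                        # SIP request/response marker
}


def classify(payload: bytes) -> str:
    """Guess a packet's application-of-origin from its payload."""
    for signature, app in SIGNATURES.items():
        if signature in payload:
            return app
    return "unknown"


def mediate(payload: bytes) -> str:
    """Return the policy action the network would apply to this packet."""
    return POLICIES[classify(payload)]


if __name__ == "__main__":
    print(mediate(b"\x13BitTorrent protocol..."))                 # -> throttle
    print(mediate(b"INVITE sip:alice@example.com SIP/2.0"))       # -> prioritize
    print(mediate(b"GET / HTTP/1.1\r\nHost: example.com"))        # -> default
```

The point of the sketch is simply that misclassification at the first step silently changes the treatment applied at the second; when a signature or policy update is poorly tested, 'non-problematic' traffic can be throttled without anyone outside the ISP being able to see why.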
Such tests are not publicly broadcast, nor are customers notified when ISPs update their DPI devices’ long-term policies. While notification must be provided to various bodies when material changes are made to the network, non-material changes can typically be deployed quietly. Few notice when a deployment of significant scale happens…unless it goes wrong. Based on user reports in the DSLreports forums, it appears that one of Rogers’ recent policy updates was poorly tested and then massively deployed. The ill effects of this deployment remain unresolved, over sixty days later.
In this post, I first detail the issues facing Rogers customers, drawing heavily from forum threads at DSLreports. I then suggest that this incident demonstrates multiple failings in DPI governance: a failure to properly evaluate analysis and throttling policies; a failure to meaningfully acknowledge problems arising from DPI misconfiguration; and a failure to proactively alleviate the inconveniences caused by accidental throttling. Large ISPs’ ability to modify data transit and discrimination conditions is problematic because it increases the risks faced by innovators and developers, who cannot predict future data discrimination policies. Such increased risks threaten the overall generative nature of the ends of the Internet. To alleviate some of these risks, a trusted third party should be established to monitor how ISPs themselves govern data traffic and to alert citizens and regulators if ISPs discriminate against ‘non-problematic’ traffic types or violate their own terms of service. I ultimately suggest that an independent, though associated, branch of the CRTC responsible for watching over ISPs could improve trust between Canadians and the CRTC, and between customers and their ISPs.

We are rapidly shifting towards a ubiquitous networked world, one that promises to accelerate our access to information and to each other, but this network requires a few key elements. Bandwidth must be plentiful, mobile devices that can engage with this world must be widely deployed, and some kind of normative-regulatory framework that encourages creation and consumption must be in place. As it stands, backhaul bandwidth is plentiful, though front-line cellular towers in America and (possibly) Canada are largely unable to accommodate the growing ubiquity of smart devices. In addition to this challenge, the normative-regulatory framework for the mobile world is threatened by regulatory capture that encourages limited, revenue-maximizing consumption while discouraging rich, mobile, creative action. Without a shift to fact-based policy decisions and pricing systems, North America risks becoming the new tech ghetto of the mobile world: rich in talent and the ability to innovate, but poor in the actual infrastructure to locally enjoy those innovations.

