Rogers, Network Failures, and Third-Party Oversight

Photo credit: Faramarz Hashemi

Deep packet inspection (DPI) is a form of network surveillance and control that will remain in Canadian networks for the foreseeable future. It operates by examining data packets, determining their likely application-of-origin, and then delaying, prioritizing, or otherwise mediating the content and delivery of the packets. Ostensibly, ISPs have inserted it into their network architectures to manage congestion, mitigate unprofitable capital investment, and enhance billing regimes. These same companies routinely run tests of DPI systems to better nuance the algorithmic identification and mediation of data packets. These tests are used to evaluate algorithmic enhancements of system productivity and efficiency at micro-levels prior to rolling new policies out to the entire network.

Such tests are not publicly broadcast, nor are customers notified when ISPs update their DPI devices’ long-term policies. While notification must be provided to various bodies when material changes are made to the network, non-material changes can typically be deployed quietly. Few notice when a deployment of significant scale happens…unless it goes wrong. Based on user reports in the DSLreports forums, it appears that one of Rogers’ recent policy updates was poorly tested and then massively deployed. The ill effects of this deployment are still unresolved, over sixty days later.

In this post, I first detail the issues facing Rogers customers, drawing heavily from forum threads at DSLreports. I then suggest that this incident demonstrates multiple failings around DPI governance: a failure to properly evaluate analysis and throttling policies; a failure to meaningfully acknowledge problems arising from DPI misconfiguration; and a failure to proactively alleviate the inconveniences of accidental throttling. Large ISPs’ ability to modify data transit and discrimination conditions is problematic because it increases the risks faced by innovators and developers who cannot predict future data discrimination policies. Such increased risks threaten the overall generative nature of the ends of the Internet. To alleviate some of these risks, a trusted third-party should be established. This party would monitor how ISPs themselves govern data traffic and alert citizens and regulators if ISPs discriminate against ‘non-problematic’ traffic types or violate their own terms of service. I ultimately suggest that an independent, though associated, branch of the CRTC that is responsible for watching over ISPs could improve trust between Canadians and the CRTC and between customers and their ISPs.

What’s Going On?

Rogers has publicly stated that they are predominantly concerned with managing upstream traffic, claiming that without throttling it they risk “becoming the world’s buffet.” As a result, the company uses DPI appliances to delay uploading data to the Internet; downloads are unaffected. Their technology “looks at the header information embedded in the payload and session establishment procedures” to identify peer-to-peer based upload traffic. If such traffic is identified, it is placed into a portion of the network allocated to upstream peer-to-peer traffic. Further, Rogers’ network management policy states that “For Rogers Hi Speed Internet (delivered over cable) and Portable Internet from Rogers customers, the maximum upload speed for P2P file sharing traffic is 80 kbps at all times. There are no limits on download speed for any application or protocol.”
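
To make the narrow scope of that published rule concrete, here is a minimal sketch in Python – emphatically not Rogers’ actual implementation, just the stated policy rendered as a lookup – in which only upstream traffic classified as P2P is capped, at 80 kbps, and everything else is left untouched.

```python
# A minimal sketch of the published policy, not Rogers' implementation:
# only upstream flows labelled "p2p" by the classifier are capped at 80 kbps.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Flow:
    application: str  # label assigned by a (hypothetical) DPI classifier, e.g. "p2p", "http"
    direction: str    # "upstream" or "downstream"

UPSTREAM_P2P_CAP_KBPS = 80  # the figure in Rogers' stated network management policy

def shaping_rate_kbps(flow: Flow) -> Optional[int]:
    """Return the rate cap for a flow, or None if the flow should be untouched."""
    if flow.direction == "upstream" and flow.application == "p2p":
        return UPSTREAM_P2P_CAP_KBPS
    return None  # per the stated policy, downloads and non-P2P traffic are left alone

if __name__ == "__main__":
    print(shaping_rate_kbps(Flow("p2p", "upstream")))    # 80
    print(shaping_rate_kbps(Flow("p2p", "downstream")))  # None -- downloads unaffected
    print(shaping_rate_kbps(Flow("http", "upstream")))   # None -- non-P2P unaffected
```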

Unfortunately, it appears as though a badly tested update to Rogers’ DPI equipment has had unintended consequences. Customers who previously enjoyed very fast downloads using P2P clients – often several Mb/s – have seen their download speeds sharply curtailed, to the point where some users are reporting maximum speeds of under 100 kb/s. Moreover, it isn’t just P2P applications that are being affected; Keith McArthur, Rogers’ senior director of social media and digital communications, has publicly confirmed that non-P2P applications are being affected by this misconfiguration. Specifically:

As some of you are aware, Rogers recently made some upgrades to our network management systems that had the unintended effect of impacting non-p2p file sharing traffic under a specific combination of conditions. Our network engineering team is working on the best way to address this issue as quickly as possible. However, I’m not able to provide any updates at this time about when this will be fixed. Our network management policy remains unchanged. You can find details of our policy here (»www.rogers.com/web/content/netwo···nagement). We are working hard to ensure that there are no gaps between our policy and the technology that enables that policy.

Keith’s public statement came about a month after people began reporting this problem (September 20, 2010), and more than a month after his comment the problem remains unresolved (now December 3, 2010). There has been a significant delay in recognizing the problem, and an even longer delay in resolving it.

Problems in Governance

Since September, one forum user has reportedly submitted a complaint to the CRTC. The result is that Rogers has to either reverse its present policies and stop throttling downloads, or change its terms of service to reflect its current practice of throttling downstream traffic. While Rogers is to be commended for leaving a comment in a public forum and acknowledging the problem, the company has not been particularly proactive in notifying its end-users about the problems with its DPI appliances. As noted in the threads on DSLreports, low-level technical staff ascribe degraded service of P2P and non-P2P applications alike to customers’ use of P2P applications. While there may be a correlation, the root cause (improperly configured network infrastructure) is not being identified over the phone.

Such ascriptions indicate that customer service has not been properly notified of DPI-related network degradation problems. Though the senior director of social media and digital communications is aware of these problems, no notice has been posted on the company’s social-media-inspired RedBoard website or on its traditional corporate website.

To begin, this failure of network configuration suggests that Rogers’ testing system needs to be refined. I expect that Rogers’ professional networking staff tested the network updates – either in an isolated test network that replicates real-world conditions or in a small portion of their production network. Doing anything else would constitute an incredibly arrogant and inappropriate deployment regime, and I cannot believe that Rogers’ networking staff would behave in such an unprofessional manner. What is more likely is that the micro-level tests were either too narrow or the derived findings were misunderstood or ambiguous. Such a failure in the testing regime demands a reevaluation of how engineers make upgrades to Rogers’ networks, and is especially important given that the error has resulted in a material degradation of service – a change that requires Rogers to notify various actors prior to the modification.
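
To give a sense of what such a reevaluation could look like in practice, here is a rough, illustrative sketch – assuming a shaping-policy function and a labelled sample of test flows, neither of which reflects Rogers’ actual tooling – of a pre-deployment audit that flags any flow outside upstream P2P that a candidate policy would shape.

```python
# Illustrative only: a pre-deployment audit that replays labelled sample flows
# through a candidate shaping policy and flags anything outside upstream P2P
# that would be throttled. The candidate below is deliberately buggy.

def candidate_policy(flow):
    """Stand-in for an updated classifier/shaper; returns a kbps cap or None."""
    if flow["application"] == "p2p":  # bug: ignores direction, so downloads get shaped too
        return 80
    return None

def audit_policy(policy, labelled_flows):
    """Return the flows the candidate policy would shape but, per the stated rule, should not."""
    violations = []
    for flow in labelled_flows:
        shaped = policy(flow) is not None
        in_scope = flow["direction"] == "upstream" and flow["application"] == "p2p"
        if shaped and not in_scope:
            violations.append(flow)
    return violations

if __name__ == "__main__":
    sample = [
        {"application": "p2p", "direction": "upstream"},     # may be shaped
        {"application": "p2p", "direction": "downstream"},   # must never be shaped
        {"application": "http", "direction": "downstream"},  # must never be shaped
        {"application": "vpn", "direction": "upstream"},     # must never be shaped
    ]
    problems = audit_policy(candidate_policy, sample)
    if problems:
        print("Candidate policy shapes out-of-scope traffic:", problems)
    else:
        print("Candidate policy matches the published rule on this sample.")
```

The point is not this particular code but the discipline it stands in for: if the published policy is the contract, some check of this sort should gate any change before it reaches the production network.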

The lack of widespread attention to the problem – at customer service, at their informal website or at their formal corporate website – indicates an additional issue concerning staff and (by extension) customer education. Customers are unlikely to know the source of their network-related problem because Rogers has only acknowledged the misconfiguration in limited channels. A customer shouldn’t have to (and is unlikely to) dig into the depths of a specialized web forum to learn about material changes that have affected their network service for a prolonged period of time, regardless of whether the changes are intentional or not.

Finally, the misconfiguration of Rogers’ equipment shows a failure to proactively notify customers of problems. I’ve contacted a host of Rogers customers over the past day, asking similar questions: Are you experiencing particular degradations of service? (All responses: yes.) Have you been contacted about the problem by Rogers? (All responses: no.) While I appreciate that it would be challenging to call every single customer, a mass email to all Rogers customers would not be a financially expensive operation, nor would a posting on the corporate website. That the company has remained relatively quiet about known issues on its network for over 60 days – knowing that the network changes have had material impacts on quality of service and are in violation of its network management policy – speaks poorly of its willingness to openly address the problem.

The Impacts of Control

There are consequences associated with running a partially controllable network, a network that is “generally open to new applications, but can be used to block them selectively” (van Schewick 2010: 288). Shifting network architectures away from the end-to-end model and towards applianced models of network connectivity “increases the relative costs of innovation and decreases the relative benefits for independent innovations” (van Schewick 2010: 289). Such changes threaten the development of novel applications that could improve the utility derived from Internet access, as well as potentially imposing constraints on technology and (metaphorically) killing the goose that lays the golden egg (Greenstein 2001: 390).

DPI has been deployed to provide ISPs with insight into, and control over, their customers’ data transmissions. Such insight is needed because applications at the ends of the network are less and less trustworthy; port obfuscation, payload encryption, randomized initial packet exchanges and more are designed to hide what applications customers are using. ISPs assert that they need to better understand the packets in their entirety to properly identify applications and transit their associated packets. In essence, ISP routers cannot trust applications to ‘honestly’ disclose their packets and so ISPs aim to ‘restore’ this trust by inspecting most/all packets that go through their routers. Thus, restoring trust has led ISPs to increase middle-network intelligence and required customers to trust network providers more than when providers operated as ‘simple’ transit networks.
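
To illustrate why this identification problem is hard, and why a guess can go wrong, consider a toy classifier – the ports, signatures, and thresholds here are all invented for the example – that falls back on progressively weaker signals as ports and payloads are obfuscated. The behavioural fallback at the bottom is precisely the kind of rule that can sweep up traffic that isn’t P2P at all.

```python
# A toy classifier, not any vendor's algorithm: it guesses a flow's application
# from whatever signals survive obfuscation, falling back to weaker heuristics.

def classify(flow):
    """Guess the application behind a flow using port, payload, then behaviour."""
    # 1. Well-known destination port: the strongest, and most easily evaded, signal.
    known_ports = {80: "http", 443: "https", 6881: "p2p"}
    if flow["dst_port"] in known_ports:
        return known_ports[flow["dst_port"]]

    # 2. Payload signature: useless once the payload is encrypted.
    if not flow["encrypted"] and b"BitTorrent protocol" in flow["first_payload"]:
        return "p2p"

    # 3. Behavioural fallback: many peers on a high port "looks like" P2P,
    #    but plenty of legitimate applications share that traffic shape.
    if flow["peer_count"] > 20 and flow["dst_port"] > 1024:
        return "p2p"  # a guess -- and sometimes a wrong one

    return "unknown"

if __name__ == "__main__":
    # An encrypted game or software-update swarm on an unregistered port:
    game_update = {"dst_port": 27015, "encrypted": True,
                   "first_payload": b"", "peer_count": 40}
    print(classify(game_update))  # "p2p" -- misclassified, and throttled along with it
```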

The problem with adding intelligence into the middle of the network is that middle-network failures have broader impacts than failures at the ends. Per Blumenthal and Clark (2001):

Network designers make a strong distinction between two sorts of elements – those that are “in” the network and those that are “attached to,” or “on,” the network. A failure of a device that is “in” the network can crash the network, not just certain applications; its impact is more universal. The end-to-end argument at this level thus states that services “in” the network are undesirable because they constrain application behaviour and add complexity and risk to the core (201).

Blumenthal and Clark’s approach to the end-to-end principle restricts the ‘narrow’ version of the end-to-end argument that van Schewick has identified. The narrow version of end-to-end asserts that “A function should only be implemented in a lower layer, if it can be completely and correctly implemented at that layer. Sometimes an incomplete implementation of the function at the lower layer may be useful as a performance enhancement” (2010: 58). Such narrow approaches to the end-to-end principle were meant to help implement applications, whereas many present understandings of the principle are used to justify hostile intentions, with ISP engineers preventing things from happening on the network and blocking certain applications (Blumenthal and Clark 2001: 106-7). The overall effect is to reduce the generativity of the network itself, reducing its “capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences” (Zittrain 2008: 70).

Finally, the appropriateness of network control varies depending on the reader’s understanding of the term ‘network management’. The issue with ‘reasonable network management’ language is that it tends not to describe an engineering principle but a policy decision. Such policy decisions are made by weighing legitimate technical and business goals with what society will bear in regards to principles such as user privacy. Thus, reasonable network management is unlikely to correlate with Paul Ohm’s definition, where the term exclusively refers to:

…the activities, methods, procedures, and tools that pertain to the operation, administration, maintenance, and provisioning of networked systems (51).

In aggregate, the introduction of control has a series of impacts. Realigning where intelligence is located in the network changes the risks and cost/benefit structure for innovators at the ends of the network. That ISPs such as Rogers can have misconfigurations lasting over 60 days that detrimentally affect P2P and non-P2P applications alike is problematic. Individuals are unlikely to know who is to blame, and such misconfigurations may increase the unpredictability of application discrimination to the point where innovators and developers abandon or limit Internet-interfaced application creation. If application-like services from the ISP continue to work (e.g. Rogers On Demand Online), people may be led away from non-proprietary streaming and content delivery services in favour of the ISP’s monetized systems. Moreover, when the ISP’s own services are not impacted by network misconfigurations there is less of an incentive for its engineers to quickly resolve the problem.

Finally, the quiet (if accidental) increase in network control also has the effect of potentially undermining the trustworthiness of the network itself. If DPI was (in part) installed because of untrustworthiness at the ends, now consumers and developers alike have less reason than before to trust the middle and core of the networks. Trust and transparency, it seems, are lacking throughout the network.

Third-Party Oversight

The capacity for large ISPs to modify data transit conditions in a seemingly randomized manner is made possible by the packet monitoring and control systems now grafted into ISPs’ networks. Given the impacts that control can have on the future of telecommunications, a trusted third-party is needed. This party should monitor how ISPs govern data traffic, alerting citizens and regulators alike if ISPs are found discriminating against ‘non-problematic’ traffic types or violating their own terms of service. Such a party does not necessarily need to dogmatically require that all ISP actions fit within the end-to-end principle. Let me illustrate what this might mean.

While Jonathan Zittrain worries about the installation of intelligence into the network he also argues that we must abandon strict adherence to end-to-end neutrality. Zittrain asserts that we would be well served to replace the end-to-end principle with a generativity principle, “a rule that asks that any modifications to the Internet’s design or to the behaviour of ISPs be made when they will do least harm to generative possibilities” (2008: 165). For such a system to be adopted, however, there must be some third-party that is technically competent and that can audit what ISPs are doing to their networks.

The hope is that by introducing a third-party between customers and ISPs some of the mutual antagonism between these two parties might be alleviated, whilst also reducing some of the privacy concerns associated with DPI more generally. Specifically, the third-party would lack a profit-based motivation to access personal information and could, as part of its mandate, oversee the limitation of ISPs’ access to personal information where the information isn’t relevant for business purposes.

While key-signing authorities could theoretically operate as one of the neutral third-parties, there remains a question of trusting the third-party itself. VeriSign, as an example, presently works alongside American copyright groups to change DNS entries for some .com addresses, and could do the same for .net addresses. Because of this partisan behaviour, VeriSign couldn’t be considered a trusted third-party. Thus, any party exercising oversight of ISPs ought to be composed of a set of neutral third-parties, so that if/when a member reveals itself as no longer trustworthy the entire oversight committee/board/organization doesn’t collapse.

Such an oversight body (in Canada) could be associated with, but independent of, the CRTC. The body ought to be resourced regardless of whether its investigations embarrass ISPs or its regulatory parent. Its acting commissioner should be appointed for a significant period of time. Further, the commissioner should retain independent authority over who to hire, within requirements set by the CRTC. Anticompetitive actions, or those in breach of acceptable use policies, network policy agreements, service level agreements or privacy policies, should be fully disclosed to the public by this independent body. ISPs could not claim confidentiality to hide their actions or network configurations when those actions or configurations violate their public statements, agreements, or CRTC decisions. The threat of this transparency into ISP network operations could and should cause ISPs to be more cautious and measured in their actions, reducing the likelihood of network misconfigurations or at least limiting their duration. Additionally, this body might generate trust with the public by separating its policies from the more formal regulatory hearings at the CRTC.

Is such an oversight body a pipe dream? Perhaps, but not an entirely unreasonable one. The CRTC is increasingly under pressure from members of the public to become more transparent or be dissolved, and telecommunications companies in general are distrusted by Canadians. Adopting an independent oversight board – one solely responsible for audits and oversight of ISP networks, and for ensuring compliance with existing CRTC policies – could realign the trust Canadians put in carriers and practically demonstrate the value and legitimacy of the CRTC to the Canadian people.

Book Sources:

Blumenthal, Marjory S. and Clark, David D. (2001). “Rethinking the Design of the Internet: The End-to-End Arguments vs. the Brave New World” in B. M. Compaine and S. Greenstein (eds.). Communications Policy in Transition: The Internet and Beyond. Cambridge, Mass.: The MIT Press.

Greenstein, Shane. (2001). “Copyright in the Age of Distributed Applications” in B. M. Compaine and S. Greenstein (eds.). Communications Policy in Transition: The Internet and Beyond. Cambridge, Mass.: The MIT Press.

van Schewick, Barbara. (2010). Internet Architecture and Innovation. Cambridge, Mass.: The MIT Press.

Zittrain, Jonathan. (2008). The Future of the Internet and How to Stop It. New Haven: Yale University Press.

6 thoughts on “Rogers, Network Failures, and Third-Party Oversight”

  1. Quick question, are you sure it was a DPI appliance issue and not QoS configuration issues at the router level? Both could cause this type of problem, but one is DPI and one is not.

  2. @Catelli

    Given that it seems to be more of an issue around the allocation of bandwidth – a task that’s been given to Rogers’ DPI equipment and not QoS policies at routers (as I read their public documents) – I’m inclined to believe it’s a DPI related issue.

  3. Nice article. I'm the one from dslreports who filed a complaint with the CRTC. I've also sent all my info off to Michael Geist, and he will be doing something with it.

  4. I've had similar issues with Bell DSL throttling streaming video.

    I've contacted the CRTC and they say that you must file a written letter of complaint with as many specifics as possible.

    The best way is to send it registered so that you can follow up on your communication.

  5. All I did was submit a complaint through the CRTC site and quote the various parts of their net neutrality rules that Rogers was breaking. I'll be posting all of the e-mails from Rogers and the CRTC's side shortly; hopefully a few others can chime in and provide more information to the CRTC.

    If I could get any names/contact information from Rogers customers, it would also help, since their legal counsel (Ken Thompson) claimed I'm the only one who has ever complained about this issue to them.

    justin a/t mckill d/o/t ca is my info.
