Analyzing the Verizon-Google Net Neutrality Framework

Technology is neither good nor bad. Nor is it neutral. Network neutrality, a political rallying cry meant to mobilize free-speech, free-culture, and innovation advocates, was reportedly betrayed by Google following the release of a Verizon-Google policy document on network management/neutrality. What the document reveals is that the two corporations, facing a (seemingly) impotent FCC, have gotten the ball rolling by suggesting a set of policies that the FCC could use in developing a network neutrality framework. Unfortunately, there has been little even-handed analysis of this document from advocates of network neutrality; instead we have witnessed vitriol and over-the-top rhetoric. This is disappointing. While sensational headlines attract readers, they do little to actually inform the public about network neutrality in a detailed, granular, reasonable fashion. Verizon-Google have provided advocates with an opportunity to pointedly articulate their views while the public is watching, and this is not an opportunity that should be squandered on bitter and unproductive criticism.

I intend this to be the first of a few posts on network neutrality.[1] In this post, I work exclusively through the principles suggested by Verizon-Google. In this first, preliminary, analysis I will draw on existing American regulatory language and on lessons that might be drawn from the Canadian experience surrounding network management. My overall sense of the document published by Verizon-Google is that, in many ways, it is quite conservative insofar as it adheres to dominant North American regulatory approaches. My key suggestion is that instead of rejecting the principles outright we should carefully consider each in turn. During my examination, I hope to identify which principles, or elements of them, could usefully be taken up into a government-backed regulatory framework that recognizes the technical, social, and economic potential of America’s broadband networks.

Continue reading

The Consumable Mobile Experience

We are rapidly shifting towards a ubiquitously networked world, one that promises to accelerate our access to information and to each other, but this network requires a few key elements. Bandwidth must be plentiful, mobile devices that can engage with this world must be widely deployed, and some kind of normative-regulatory framework that encourages creation and consumption must be in place. As it stands, backhaul bandwidth is plentiful, though front-line cellular towers in America and (possibly) Canada are largely unable to accommodate the growing ubiquity of smart devices. In addition to this challenge, we operate in a world where the normative-regulatory framework for the mobile world is threatened by regulatory capture, one that encourages the limited consumption that maximizes revenues while simultaneously discouraging rich, mobile, creative action. Without a shift to fact-based policy decisions and pricing systems, North America risks becoming the new tech ghetto of the mobile world: rich in talent and the ability to innovate, but poor in the actual infrastructure to locally enjoy those innovations.

At the Canadian Telecom Summit this year, mobile operators such as TELUS, Wind Mobile, and Rogers Communications were all quick to pounce on the problems facing AT&T in the US. AT&T regularly suffers voice and data outages for its highest-revenue customers: those who own and use smartphones built on Android, WebOS (i.e. the Palm Pre and Pixi), and iOS. Each of these Canadian mobile companies used AT&T’s weaknesses to hammer home that unlimited bandwidth cannot be offered over mobile networks, and suggested that AT&T’s shift from unlimited to limited data plans is indicative of the backhaul and/or spectrum problems caused by smart devices. While I do not want to entirely contest the claim that there are challenges in managing exponential increases in mobile data traffic, I do want to suggest that technical analysis, rather than rhetorical ‘obviousness’, should be applied to understand the similarities and differences between Canadian telcos/cablecos and AT&T.

Continue reading

Digital Crises and Internet Identity Cards

Something that you learn if you (a) read agenda-setting and policy laundering books, or (b) have ever worked in a bureaucratic environment, is that it’s practically criminal to waste a good crisis. When a crisis comes along, various policy windows tend to open up unexpectedly, and if you have the right policies waiting in the wings you can ram through proposals that would otherwise be rejected out of hand. An example: the Patriot Act wasn’t written in just a few days; it was presumably resting in someone’s desk, just waiting to be dusted off and implemented. 9/11 was the crisis that opened the policy windows required to push that particular policy through the American legislative system. Moreover, the ‘iPatriot’ Act, its digital equivalent, is reportedly already written and waiting in a drawer for a similar crisis. With the rhetoric ramping up around Google’s recent claims that it was hacked by the Chinese government (or agents of that government), we’re seeing bad old ideas surface once again: advocates of ‘Internet Identity Cards’ (IICs) are checking whether these cards’ requisite policy window is opening.

The concept of IICs is not new: in 2001 (!) the Institute for Public Policy Research suggested that children should take ‘proficiency tests’ at age 11 to let them ‘ride freer’ on the ‘net. Prior to passing this ‘test’, children would have restrictions on their browsing abilities, based (presumably) on some sort of identification system. The IIC, obviously, didn’t take off – children aren’t required to ‘license up’ – but the recession of the IIC into the background of the Western cyberenvironment hasn’t meant that research and development or infrastructure deployment for these cards has gone away. Who might we identify as a national leader of the IIC movement, and why are such surveillance mechanisms likely incapable of meeting stated national policy objectives, yet nevertheless inevitable?

Continue reading

Deep Packet Inspection and the Confluence of Privacy Regimes

I learned today that I was successful in winning a Social Sciences and Humanities Research Council (SSHRC) award. (Edit September 2009: I’ve been upgraded to a Joseph Armand Bombardier Canada Graduate Scholarship.) Given how difficult I found it to locate successful research statements (save through personal contacts), I wanted to post my own statement for others to look at (and download, if they so choose). Since writing the statement below, some of my thoughts on DPI have become more nuanced, and I’ll be interested in reflecting on how ethics might relate to surveillance/privacy practices. Comments and ideas are, of course, welcomed.

Interrogating Internet Service Provider Surveillance:
Deep Packet Inspection and the Confluence of International Privacy Regimes

Context and Research Question

Internet Service Providers (ISPs) are ideally situated to survey data traffic because all traffic to and from the Internet must pass through their networks. Using sophisticated data traffic monitoring technologies, these companies can investigate and capture the content of unencrypted digital communications (e.g. MSN messages and e-mail). Despite ISPs’ role as the digital era’s gatekeepers, very little work has been done in the social sciences to examine the relationship between the surveillance technologies that ISPs use to survey data flows and the regional privacy regulations that adjudicate permissible degrees of ISP surveillance. With my seven years of employment in the field of Information Technology (the last several in network operations), and a strong background in conceptions of privacy and their empirical realization from my master’s degree in philosophy and current doctoral work in political science, I am unusually well suited to investigate this relationship. I will bring this background to bear when answering the following interlinked questions in my dissertation: What are the modes and conditions of ISP surveillance in the privacy regimes of Canada, the US, and the European Union (EU)? Do common policy structures across these privacy regimes engender common realizations of ISP surveillance techniques and practices, or do regional privacy regulations pertaining to DPI technologies preclude any such harmonization?

Continue reading

Analysis: ipoque, DPI, and bandwidth management

In 2008, ipoque released a report titled “Bandwidth Management Solutions for Network Operators”. Using Deep Packet Inspection appliances, it is possible to establish a priority management system that privileges certain applications’ traffic over others; VoIP traffic can be dropped last, whereas P2P packets are given the lowest priority on the network. Two modes of management are proposed by ipoque:

  1. Advanced Priority Management: where multi-tiered priorities maintain Quality of Experience (rather than Service) by identifying some packet types as more important than others (e.g. VoIP packets are more important than BitTorrent packets). Under this system, less important packets are dropped only as needed, rather than once a bandwidth cap is met (a rough sketch of this behaviour follows the list).
  2. Tiered Service Model: this uses a volume-service system, where users can purchase so much bandwidth for particular services. This is the ‘cell-phone’ model, where you sign up for packages that give you certain things, and if you exceed your package limitations extra charges may apply*. Under this model you might pay for a file-sharing option, as well as a VoIP and/or streaming HTTP bundle.
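To make the first mode concrete, here is a minimal sketch of how an advanced-priority scheduler might behave. The application classes, priority values, and queue capacity are all illustrative assumptions for this post, not details taken from ipoque's report:

```python
import heapq

# Illustrative priority tiers (lower value = more important); the actual
# application classes used by DPI appliances are not reproduced here.
PRIORITY = {"voip": 0, "http": 1, "streaming": 2, "p2p": 3}

class PriorityShaper:
    """Sheds the least important traffic only when the link is congested."""

    def __init__(self, capacity_pkts: int):
        self.capacity = capacity_pkts
        self.queue: list[tuple[int, int, bytes]] = []  # (priority, seq, packet)
        self.seq = 0  # tie-breaker so equal-priority packets stay FIFO

    def enqueue(self, app: str, packet: bytes) -> None:
        heapq.heappush(self.queue, (PRIORITY.get(app, 3), self.seq, packet))
        self.seq += 1
        if len(self.queue) > self.capacity:
            # Congestion: drop the *least* important queued packet,
            # not simply the newest arrival.
            victim = max(self.queue)
            self.queue.remove(victim)
            heapq.heapify(self.queue)

    def dequeue(self):
        """Serve the most important packet first (VoIP before P2P)."""
        return heapq.heappop(self.queue)[2] if self.queue else None
```

In this sketch a VoIP packet is shed only after every queued P2P packet has already been dropped, which matches the ‘dropped as needed’ behaviour described in the first mode, as opposed to a hard cut-off once a bandwidth cap is reached.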

The danger with filtering by application (from ipoque’s position) is that while local laws can be enforced, it opens the ISP to dissatisfaction if legitimate websites are blocked. Thus, while an ISP might block Mininova, it cannot block Fedora repositories as well – the first block might conform to local laws, whereas the second would infringe on consumers’ freedoms. In light of this challenge, ipoque suggests that ISPs could adopt Saudi Arabia-like white-lists, where consumers can send a message to their ISP when they find sites being illegitimately blocked. Once the ISP checks out the site, it can either remove the site from the black-list or inform the customer of why the site must remain listed. A rough sketch of this review loop appears below.
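The report-and-review workflow could be modelled, very roughly, as follows. This is a minimal sketch under assumed names (`blacklist`, `report_block`, `review`) and an example entry; none of it is drawn from ipoque’s report:

```python
# Hypothetical review workflow for user-reported blocks.
blacklist = {"mininova.org"}                 # example entry blocked under local law
pending_reviews: list[tuple[str, str]] = []  # (customer, domain) awaiting staff review

def report_block(customer: str, domain: str) -> None:
    """A customer flags a site they believe is illegitimately blocked."""
    if domain in blacklist:
        pending_reviews.append((customer, domain))

def review(domain: str, violates_local_law: bool) -> str:
    """ISP staff either unblock the site or explain why it stays listed."""
    if not violates_local_law:
        blacklist.discard(domain)
        return f"{domain} has been removed from the block list"
    return f"{domain} remains blocked to comply with local law"
```

The point of the sketch is simply that the burden of correcting over-blocking falls on the customer, who must notice the block and report it before the ISP ever reviews the listing.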

Continue reading

Three-Strikes to Banish Europeans and Americans from the ‘net?

Throughout the Global North, there are discussions on the table for introducing what are called ‘three-strikes’ rules, designed to cut off, or hinder, people’s access to the Internet should they be caught infringing copyright. In the EU, the big content cartel is trying to get ISPs to inspect consumer data flows and, when copyrighted content is identified, ‘punish’ the individual in some fashion. Fortunately, it looks as though at least the EU Parliament is against imposing such rules, on the basis that disconnecting individuals from the Internet would infringe on EU citizens’ basic rights. In an era where we are increasingly digitizing our records and basic communications infrastructure, it’s delightful to see a body in a major world power recognize the incredibly detrimental and over-reactionary behavior that the copyright cartel is calling for. Copyright infringement does not trump basic civil liberties.

Now, I expect that many readers would say something along these lines: I don’t live in the EU, and the EU Parliament has incredibly limited powers. Who cares? This (a) doesn’t affect me, and (b) is unlikely to have a real impact on EU policy.

Continue reading