Beyond Fear and Deep Packet Inspection

Over the past few days I’ve been able to attend to non-essential reading, which has given me the opportunity to start chewing through Bruce Schneier’s Beyond Fear. The book, in general, is an effort on Bruce’s part to get people thinking critically about security measures. It’s incredibly accessible and easy to read – I’d highly recommend it.

Early on in the text, Schneier provides a set of questions that ought to be asked before deploying a security system. I want to very briefly think through those questions as they relate to Deep Packet Inspection (DPI) in Canada, to begin narrowing a security-derived understanding of the technology. My hope is that, by critically engaging with this technology, a model to capture concerns and worries can start to emerge.

Question 1: What assets are you trying to protect?

  • Network infrastructure from being overwhelmed by data traffic.

Question 2: What are the risks to these assets?

  • Synchronous bandwidth-heavy applications running 24/7 that generate congestion and thus broadly degrade consumer experiences.

Question 3: How well does security mitigate those risks?

Continue reading

Deep Packet Inspection: What Innovation Will ISPs Encourage?

All sorts of nasty things are said about ISPs that use Deep Packet Inspection (DPI): ISPs aren’t investing enough in their networks, they just want to punish early adopters of new technologies, they’re looking to deepen their regulatory capacities, or they want to track what their customers do online. ISPs, in turn, tend to insist that P2P applications are causing undue network congestion, and that DPI is the only measure presently available to them to alleviate such congestion.

The constant focus on P2P over the past few years has resulted in various ‘solutions’, including the development of P4P and the shift to UDP. Unfortunately, the cat-and-mouse game between groups representing record labels, ISPs (to a limited extent), and end-users has led to conflict that has ensured that most of the time and money is being put into ‘offensive’ and ‘defensive’ technologies and tactics online, rather than more extensively into bandwidth-limiting technologies. ‘Offensive’ technologies include those that enable mass analysis of data- and protocol-types to try to stop or delay particular modes of data sharing. While DPI can be factored into this set of technologies, a multitude of network technologies can just as easily fit into this category. ‘Defensive’ technologies include port randomizers, superior encryption and anonymity techniques, and other techniques that are primarily designed to evade particular analyses of network activity.

I should state up front that I don’t want to make myself out to be a technological determinist; neither ‘offensive’ nor ‘defensive’ technologies are in a necessary causal relationship with one another. Many of the ‘offensive’ technologies could have been developed in light of increasingly nuanced viral attacks and spam barrages, to say nothing of the heightening complexity of intrusion attacks and pressures from the copyright lobbies. Similarly, encryption and anonymity technologies would have continued to develop, given that in many nations it is impossible to trust local ISPs or governments.

Continue reading

Background to North American Politics of Deep Packet Inspection

The CRTC is listening to oral presentations concerning Canadian ISPs’ use of Deep Packet Inspection (DPI) appliances to throttle Canadians’ Internet traffic. Rather than talk about these presentations at any length, I thought that I’d step back a bit and try to outline some of the attention that DPI has received over the past few years. This should give people who are newly interested in the technology an appreciation for why DPI has become the focus of so much attention, and provide paths to learn about the politics of DPI. This post is meant to be a fast overview, and only attends to the North American situation, given that it’s the one I’m most familiar with.

Massive surveillance of digital networks took off as an issue in 2005, when the New York Times published their first article on the NSA’s warrantless wiretapping operations. Concern about such surveillance had brewed for years, but (in my eyes) it really exploded as the public started to learn about the capacities of DPI technologies as potential tools for mass surveillance.

DPI began garnering headlines in a major way in 2007, largely as a result of Nate Anderson’s piece, “Deep packet inspection meets ‘Net neutrality, CALEA.” Anderson’s article is typically recognized as the popular news article that put DPI on the scene, and the American public’s interest in the technology was reinforced by Comcast’s use of TCP RST packets, made possible by Sandvine equipment. These packets (which appear to have been first discussed in 1981) were used by Comcast to convince P2P clients that the other client(s) in a P2P session didn’t want to communicate with the Comcast subscriber’s P2P application, which led to the termination of the data transmission. Things continued to heat up in the US as the behavioural advertising company NebuAd began partnering with ISPs to deliver targeted ads to those ISPs’ customers using DPI equipment. The Free Press hired Robert Topolski to perform a technical analysis of what NebuAd was doing; he found that NebuAd was (in effect) performing a man-in-the-middle attack to alter packets as they coursed through ISP network hubs. His report, prepared for Congressional hearings into the surveillance of Americans’ data transfers, was key to driving American ISPs away from NebuAd in the face of political and customer revolt over targeted advertising practices. NebuAd has since shut its doors. In the US there is now talk of shifting towards agnostic throttling, rather than throttling that targets particular applications: discrimination would be applied equally, instead of homing in on specific groups.
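To make the mechanics a little more concrete: before an appliance can forge RST packets into a P2P session, it first has to identify the session as P2P. Here is a minimal sketch of the signature-matching step involved; it assumes nothing about Sandvine’s actual implementation (real appliances match far more protocols and track per-flow state), and only the BitTorrent handshake prefix is a real on-the-wire constant.

```python
# Toy sketch of signature-based payload classification, the general
# technique DPI appliances use to identify P2P flows. Illustrative only.

# The BitTorrent wire protocol opens with a one-byte length prefix (19)
# followed by the literal string "BitTorrent protocol".
BITTORRENT_SIGNATURE = b"\x13BitTorrent protocol"

def classify_payload(payload: bytes) -> str:
    """Label a packet payload by matching known byte signatures."""
    if payload.startswith(BITTORRENT_SIGNATURE):
        return "bittorrent"
    if payload.startswith(b"GET ") or payload.startswith(b"POST "):
        return "http"
    return "unknown"

print(classify_payload(BITTORRENT_SIGNATURE + b"\x00" * 8))  # bittorrent
print(classify_payload(b"GET /index.html HTTP/1.1\r\n"))     # http
```

Once a flow is labelled “bittorrent”, injecting a RST (or throttling, or logging) is a policy decision layered on top of this classification step.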

In Canada, there haven’t been (many) accusations of ISPs using DPI for advertising purposes, but throttling has been at the center of our discussions of how Canadian ISPs use DPI to delay P2P applications’ data transfers.

Continue reading

Draft: What’s Driving Deep Packet Inspection in Canada?

For the past few weeks I’ve been working away on a paper that tries to bring together some of the CRTC filings that I’ve been reading for the past few months. This is a slightly revised and updated version of a paper that I presented to the Infoscape research lab recently. Many thanks to Fenwick Mckelvey for taking the lead to organize that, and also to Mark Goldberg for inviting me to the Canadian Telecom Summit, where I gained an appreciation for some of the issues and discussions that Canadian ISPs are presently engaged in.

Abstract:

Canadian ISPs are developing contemporary netscapes of power. Such developments are evidenced by ISPs categorizing, and discriminating against, particular uses of the Internet. Simultaneously, ISPs are disempowering citizens by refusing to disclose the technical information needed to meaningfully contribute to network-topology and packet discrimination discussions. Such power relationships become stridently manifest when observing Canadian public and regulatory discourse about a relatively new form of network management technology, deep packet inspection. Given the development of these netscapes, and Canadian ISPs’ general unwillingness to transparently disclose the technologies used to manage their networks, privacy advocates concerned about deep packet networking appliances’ abilities to discriminate between data traffic should lean towards adopting a ‘fundamentalist’, rather than a ‘pragmatic’, attitude concerning these appliances. Such a position will help privacy advocates resist the temptation of falling prey to case-by-case analyses that threaten to obfuscate these devices’ full (and secretive) potentialities.

Full paper available for download here. Comments are welcome; either leave them here on the blog, or fire something to the email address listed on the first page of the paper.

Thoughts: P2P, PET+, and Privacy Literature

Peer-to-peer (P2P) technologies are not new and are unlikely to disappear anytime soon. While I’m tempted to talk about The Pirate Bay, or ‘the Pirate Google’, in the context of P2P and privacy, other people have discussed these topics exceptionally well, and at length. No, I want to talk (in a limited sense) about the code of P2P and how these technologies are (accidentally) used, to reflect on what privacy literature might offer to the debate concerning the regulation of P2P programs.

I’ll begin with code and P2P. In the US there have been sporadic discussions in Congress that P2P companies need to alter their UIs and make it more evident what individuals are, and are not, sharing on the ‘net when they run these programs. Matthew Lasar at Ars Technica has noted that Congress is interested in cutting down on what is termed ‘inadvertent sharing’ – effectively, members of Congress recognize that individuals have accidentally shared sensitive information using P2P applications, and want P2P vendors to design their programs in a way that will limit accidental sharing of personal/private information. Somewhat damningly, the United States Patent and Trademark Office declared in 2006 that P2P applications were “uniquely dangerous,” and capable of causing users “to share inadvertently not only infringing files, but also sensitive personal files like tax returns, financial records, and documents containing private or even classified data” (Source).
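The design change Congress is asking for is not complicated in principle. As a minimal sketch – the suffix list and function here are my own illustration, not drawn from any actual P2P client – a vendor could flag likely-sensitive files before a folder is shared:

```python
from pathlib import Path

# File types that commonly hold personal or financial data.
# Illustrative only; a real client would need a far richer heuristic.
SENSITIVE_SUFFIXES = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".qif", ".ofx"}

def flag_sensitive(filenames):
    """Return the filenames a client should warn about before sharing."""
    return [f for f in filenames if Path(f).suffix.lower() in SENSITIVE_SUFFIXES]

print(flag_sensitive(["2008_tax_return.pdf", "song.mp3", "resume.docx"]))
# ['2008_tax_return.pdf', 'resume.docx']
```

The point isn’t the particular list of extensions; it’s that a confirmation step keyed on checks like this would address exactly the ‘inadvertent sharing’ the USPTO report worries about, without touching the file-transfer machinery itself.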

Continue reading

Update: Network Management, Packet Inspection, and Stimulus Dollars?

Iain Thomson notes that the stimulus bill that recently cleared the American Congress might work to legitimize ISP packet inspection practices under the guise of ‘network management’. Specifically, the amendment in question reads:

In establishing obligations under paragraph (8), the assistant secretary shall allow for reasonable network management practices such as deterring unlawful activity, including child pornography and copyright infringement.

While Thomson takes this to (potentially) mean that ISPs and major content producers/rights holders might use this language to justify the use of packet inspection technologies, it’s possible that alternate management methods could be envisioned. This said, given that copyright infringement is explicitly noted, there is a very real worry that this clause might be used to push for ISP ‘policing’. Any such effect, I suspect, would further escalate the war between P2P and Media; encryption would become more common and effective, and users would grow more sophisticated in avoiding inspection devices. This is a real loss for any and all groups who rely on non-encrypted traffic for intelligence purposes; any drive that gets ‘common folk’ thinking about encrypting more and more of their traffic, accompanied by relatively easy ways of doing so, will substantially hinder the capture of actual content. How you read the implications of this depends on your perspective on privacy and surveillance, but it seems to me that it threatens to further escalate a ‘war’ that criminalizes huge swathes of the population for actions that are relatively harmless.
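The escalation dynamic can be shown with a toy example: a DPI box that matches known byte signatures (here, the real BitTorrent handshake prefix) sees only opaque bytes once a payload is encrypted. XOR with a repeating key stands in for real encryption purely to keep the sketch short – nothing here reflects any actual appliance or client.

```python
# Toy illustration of why encrypted traffic hinders payload inspection.

BITTORRENT_SIGNATURE = b"\x13BitTorrent protocol"  # real BitTorrent wire prefix

def matches_signature(payload: bytes) -> bool:
    """Signature check of the kind a DPI appliance performs."""
    return payload.startswith(BITTORRENT_SIGNATURE)

def toy_encrypt(payload: bytes, key: bytes) -> bytes:
    """Repeating-key XOR; a stand-in for real encryption, for brevity only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

handshake = BITTORRENT_SIGNATURE + b"\x00" * 8
print(matches_signature(handshake))                       # True
print(matches_signature(toy_encrypt(handshake, b"k3y")))  # False
```

Once the plaintext signature disappears, an inspection device is left guessing from traffic shape and timing alone – which is exactly why widespread encryption is such a loss for anyone relying on content capture.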