Comment: Canadian ISPs and Internet Traffic Management

I’ve recently put up a document that summarizes most of the first round of filings for the CRTC’s investigation of Canadian ISP traffic management practices (PN 2008-19), and thought that I’d post a few of the things I found most interesting. Keep in mind that many of my interests revolve around deep packet inspection.

Network Use Averages

  1. Bell filed their specific data points in confidence, though from what they provided we can see that the share of network traffic attributable to the top 5% of users has declined from 61.1% to 46.6%, and the top 10%’s share has declined from 77.1% to 62.6%.
  2. In TELUS’ case, we find that their retail customers have decreased the amount of content they are uploading, though they are downloading more. Their wholesale customers are both downloading and uploading more than in 2006. Specific traffic data was filed in confidence to the CRTC.
  3. Bell finds that P2P and HTTP/streaming traffic are the end-user categories that contribute most to bandwidth usage.

Canadian ISPs Admitting to Traffic Management

  1. Bell Wireline (excludes Bell Mobility and Bell Aliant Atlantic). DPI technology is used, though the vendor and products are filed in confidence.
  2. Cogeco uses DPI, but has filed the vendor and products in confidence.
  3. Rogers filed their comments in confidence, but from past information that has emerged we know that they are using DPI equipment.
  4. Shaw Communications Inc. uses Arbor-Ellacoya devices, though the particular products are filed in confidence.
  5. Barrett Xplore Inc. uses VoIP prioritization, provisioning of modems, and DPI. Specifics are filed in confidence.
  6. While not explicitly stated, it appears as though Bragg Communications Ltd. also uses DPI.

Canadian ISPs Not Using Traffic Management

  1. MTS Allstream Inc.
  2. SaskTel (though they do use Arbor Peakflow SP, dominantly for network security purposes)
  3. Primus Telecommunications Canada Inc.
  4. Telus

What is Being Filtered/Throttled?

  1. Bell acknowledges that they do throttle traffic between 1630 and 0200 each day by limiting bandwidth available to P2P applications. A detailed listing of applications is not publicly mentioned.
  2. Cogeco currently uses management technologies against: eDonkey/eMule, EmuleEncrypted, Kazaa, Fast Track KaZaA Networking, Napster, Bittorrent, Dijjer, Manolito, Hotline, Share, Soulseek, v-share, Zattoo, Joost, KuGoo, Kuro, DHT, Commercial File Sharing, Baidu Movie, Club Box, Winny, Gnitella, Gnutella Networking, WinMX, Direct Connect, PeerEnabler, Exosee, Further, Filtopia, Mute, NodeZilla, waste, Warez, NeoNet, PPLiveStream Misc, BAIBAO, POCO, Entropy, Rodi, Guruguru, Pando, Soribada, Freenet, PacketiX, Feidian, AntsP@P, Sony Location Free, thunder, Web Thunder. They only look at the specific signature of P2P applications.
  3. Rogers “looks at header information embedded in the payload and session establishment procedures.” What is unclear to me is how header information could be embedded in the payload itself – these are two separate spaces in a packet, as I understand networking 101. The specific P2P applications that are filtered are not mentioned, though they concentrate only on uploaded content.
  4. Shaw doesn’t say – they’ve filed their findings in confidence.
  5. Barrett doesn’t say – they’ve filed their findings in confidence.
  6. Bragg targets: Bittorrent, News, DirectConnect, Blubster, gnutella, KaZaA, WinMX, eDonkey, Filetopia, Hotline, GuruGuru, Soribada, Soulseek, Ares, JoltID, eMule, Waste, Konspire2b, ExoSee, FurtherNet, MUTE, GNUnet, Nodezilla. Bragg focuses on the packet headers and the behaviour of packet exchanges, avoiding learning about the content of packet flows.

Under What Conditions Non-Management ISPs Would Manage Their Networks

  1. MTS Allstream notes that it would invest in management technologies only if a capital investment analysis found that they would lead to enhanced revenue.
  2. SaskTel has three conditions that would lead them to adopt management technologies: (a) if customer demand outstripped capacity and augmentation could not be economically accomplished; (b) if competitive forces required the introduction of alternate service definitions; (c) if there was a need to enforce the AUP so that there was sufficient network capacity for end-users.
  3. TELUS does not currently use management technologies such as DPI, and has no plans to do so.

There is more in the document that is of note, but insofar as it pertains to DPI I thought that these were probably core points that people would be interested in.

Summary: CRTC PN 2008-19; ISP Traffic Management in Canada

As someone who is academically invested in how the ‘net is being regulated in Canada, I’ve been following the recent CRTC investigation into Internet management practices and regulation with considerable interest. Given that few people are likely to dig through the hundreds of pages that were in the first filing, I’ve summarized the responses from ISPs (save for Videotron’s submissions; I don’t read French) to a more manageable 50 pages. Enjoy!

Update: Thanks to Eric Samson and Daniel for translating Videotron’s filings – you guys rock!

Acer Aspire One Review

I’ve recently become responsible for the upkeep of an Aspire One netbook. My thoughts, thus far: wait a while, get a different model than I did, and dump Linpus as quickly as possible. First, I’ll provide the actual specs for the netbook in the house, and then outline my thoughts a bit more.

Acer Aspire One (AOA110-1531, Refurbished)

  • Sapphire Blue
  • Intel Atom Processor N270 (512KB L2 cache, 1.60GHz, 533MHz FSB)
  • 512MB DDR2 533 SDRAM
  • 8GB SSD
  • Card Reader
  • 802.11b/g WLAN
  • 10/100 LAN
  • Webcam
  • 8.9″ WSVGA (1024×600)
  • 3 cell battery
  • Preloaded with Linux

The Good

I’ll start with the good points: it’s very light, was very affordable (~$290 CAD after taxes), and the Linpus OS boots very quickly. The screen is gorgeous, and with decent battery management you can squeeze about 2.5 hours out of it. While I’m not the biggest fan of the keyboard (I’m now very used to the ‘chiclet’ style Apple keyboards), it’s not terrible – I can probably hit about 80% of my average wpm on it.

The Not-So-Good

Now, let’s talk about what I dislike:

  1. It’s a locked box. Seriously – I’ve broken down my share of notebooks, and while I’ll likely have another go in a week or two, actually accessing the SODIMM slots is hard. Really hard. Hard enough that I’d say either wait until they make getting into the AAO more reasonable, or just buy one with more memory. These little guys are not meant to be opened and modified (you can, but it’s not easy).
  2. Linpus is terrible. There, I said it. It boots quickly, but it’s built on a modified version of Fedora, and when I try to use the add/remove programs tool I consistently get dependency errors. Is this fixable? Sure. Should I have to fight with the damn OS at the command line just to upgrade to OpenOffice.org 3.0? No.
  3. Support from other OSes is still in its infancy. I’ve worked with Linux before, and I get the ‘Linux is a learning experience; you can’t expect things to just work’ attitude. That said: I don’t want to be fiddling around with the command line for a few days to get my install working properly. At the moment, I’m just waiting for some bugs with Ubuntu to get ironed out, and then Linpus is being replaced.
  4. Linpus doesn’t connect to wireless networks. Well, let me rephrase: it will connect to non-enterprise networks. Anything WPA2-Enterprise or newer, and you’re out of luck until you replace the network manager. When you *do* replace the manager, you run into problems with it not remembering wireless access passwords when you come out of hibernation.
  5. Card reader memory allocation is hit and miss. Apparently, if you don’t tinker with anything, you can insert an SD card and it is dynamically added to the flash memory available to the OS. This is cool – I got an 8GB SD card to slide into the AAO, which would give it a cool 16GB of total internal storage – more than enough for casual browsing and word processing. The catch: as soon as you make the modifications needed to access the OS proper, you have to manually mount the SD card each time you turn on the computer or bring it out of hibernation. This wouldn’t be an issue if I didn’t want to unlock the OS, but using the computer as a computer shouldn’t break this feature.
  6. Ships with Firefox 2.0. I mean, really – FF3.0 has been out for a long time. Why the hell is it shipping with FF2.0?
  7. Terrible SSD. I get 7.x MB/s writes to the disk. Enough said.

If you’re looking to buy one of these, get a version with a spinning disk drive. That said, if you want a netbook that is just going to rock out of the box, I’d suggest getting the HP 2140 – it’ll be a bit more expensive, but I think you’ll be a lot happier. Maybe I’ll change my tune once Ubuntu is loaded on the netbook. In fairness, I should note that I’m being picky (because it’s what I do with these kinds of things), and the person actually using the computer doesn’t have these complaints – it does what it needs to (though not being able to access WPA2-Enterprise networks has caused them problems). That said, I think that items (1), (2), (4), (5), (6) and (7) really are showstoppers, though (2) and (4) will both be alleviated by changing OSes, (5) is resolvable by shifting where documents and such are saved, and (6) is solved by hitting up the command line a bit.
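For what it’s worth, the command-line workarounds for (5) and (6) amount to a couple of lines at the terminal. A minimal sketch, with the caveat that the device path, mount point, and package availability here are assumptions on my part – check what your unit actually reports before copying anything:

```shell
# Remount the SD card after a reboot or resume from hibernation.
# /dev/mmcblk0p1 and /mnt/sdcard are assumptions - check `dmesg` for
# the real device name on your machine.
sudo mkdir -p /mnt/sdcard
sudo mount /dev/mmcblk0p1 /mnt/sdcard

# To avoid retyping that, an /etc/fstab entry lets a regular user
# remount the card with a bare `mount /mnt/sdcard`:
#   /dev/mmcblk0p1  /mnt/sdcard  vfat  noauto,user  0 0

# Pull a newer Firefox from the Fedora repositories Linpus builds on
# (whether Firefox 3 is actually available depends on which Fedora
# release Linpus tracks).
sudo yum update firefox
```

None of this survives a factory restore, of course, so it’s a band-aid until the OS gets replaced.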

My rating: 3/5

Review: Privacy On The Line

This updated edition of Diffie and Landau’s text is a must-have for anyone who is interested in how encryption and communicative privacy politics have developed in the US over the past century or so. Privacy On The Line moves beyond a ‘who did what’ in politics, instead seeing the authors bring their considerable expertise in cryptography to bear in order to give the reader a strong understanding of the actual methods of securing digital transactions. After reading this text, the reader will have a good grasp of what types of encryption methods have been used through history, a strong understanding of the value of, and distinction between, digital security and digital privacy, as well as an understanding of why and how data communications are tracked.

The only disappointment is the relative lack of examination of how the US has operated internationally – there is very little mention of the OECD, nor of European data protection, to say nothing of APEC. While the authors do talk about the role of encryption in the context of export control, I was a bit disappointed at just how little they talked about perceptions of American efforts abroad – while this might have extended slightly beyond the American-centric lens of the book, it would have added depth of analysis (though perhaps at the expense of making the book too long for traditional publication). One of the great elements of this book is its absolutely stunning bibliography, references, and glossary – 106 pages of useful reference material flesh out the already excellent analysis of encryption in the US.

Ultimately, if you are interested in American spy politics, or in encryption in contemporary times, or in how these two intersect in the American political arena, then this text is for you.

Questions of Digitizing Identity

A common element of the (various) streams of thought that I’m usually engaged in surrounds the question of identity. What constitutes identity? How is this constitution being modulated (or is it?) in digital spaces? What can past and contemporary theorists offer us, in response to these questions? What are the strengths of these responses, and what are their weaknesses?

Over the next six months or so, I want to begin taking up these questions more seriously. I plan to begin constructing an account in order to gain a better appreciation for both how granularly we often attempt to separate identities, and how at the same time those are often shared, surveyed, or otherwise modified without our ever being aware. My thoughts are that a core difference between ‘analogue’ and ‘digital’ identities follows from the (relative) ease of surveying and modifying digital identities without the source of that identity ever being made aware. While unobtrusive surveillance is possible in an analogue space, there is an emphasis in the West on the development of homogeneous protocols that are intended to facilitate the diffusion of data across digital pathways, and this carries with it new ways of collating and modulating available dataflows. Continue reading

Building Platforms for the Future

In this post I want to think, just a little bit, about the role of platforms and how I’m attempting to maneuver this space. This is aimed at better clarifying (for me) how this space is used, as well as to render its use more transparent (which is apparently a core facet of building successful platforms *grin*).

A few weeks ago I was linked to a blog page that discussed the role of platforms in opening up future publishing-related avenues. The principles of the post could be boiled down to the following:

  1. Speak authentically;
  2. Speak regularly;
  3. Speak in an open, transparent fashion;
  4. Speak so that the development of ideas is clear;
  5. Speak so that you are demonstrating your authority.


These are, generally, pretty traditional tropes surrounding Web 2.0. There isn’t anything terrifically new here. What interests me, however, are the fourth and fifth items. In rendering transparent the development of ideas, what I think is most helpful is that a final, codified, textual work is opened up to the reader. Should they be sufficiently interested in the work, they can move to see how an argument was, and wasn’t, developed. By seeing how and why a writer has cordoned off particular threads of thought, it is possible to open new and interesting approaches to a critique. I’m a firm advocate of the view that reading theory is greatly assisted by understanding the author’s life and situation, and blogging (or establishing a similarly open platform) gives readers a way of getting to ‘know’ the author beyond the 100-word summary at the end of a book.

Continue reading