Technology, Thoughts & Trinkets

Touring the digital through type

Month: July 2009 (page 1 of 2)

Deep Packet Inspection: What Innovation Will ISPs Encourage?

All sorts of nasty things are said about ISPs that use Deep Packet Inspection (DPI): ISPs aren’t investing enough in their networks, they just want to punish early adopters of new technologies, they’re looking to deepen their regulatory capacities, or they want to track what their customers do online. ISPs, in turn, tend to insist that P2P applications cause undue network congestion, and that DPI is the only measure presently available to them to alleviate such congestion.

The constant focus on P2P over the past few years has produced various ‘solutions’, including the development of P4P and the shift to UDP. Unfortunately, the cat-and-mouse game between groups representing record labels, ISPs (to a limited extent), and end-users has ensured that most of the time and money goes into ‘offensive’ and ‘defensive’ technologies and tactics online, rather than more extensively into bandwidth-limiting technologies. Offensive technologies include those that enable mass analysis of data- and protocol-types to try to stop or delay particular modes of data sharing. While DPI can be counted among these technologies, a multitude of network technologies fit this category just as easily. ‘Defensive’ technologies include port randomizers, stronger encryption and anonymity techniques, and other measures primarily designed to evade particular analyses of network activity.
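To make the ‘defensive’ side concrete, here is a minimal sketch of the port-randomization idea mentioned above: a client binds to a randomly chosen ephemeral port rather than a well-known one, so simple port-based filters cannot identify the application by port number alone. (This is an illustrative example, not any particular client’s implementation; note that DPI defeats this tactic precisely because it inspects payloads rather than ports.)

```python
import random
import socket

def bind_random_port(low=49152, high=65535):
    """Bind a TCP socket to a randomly chosen port in the
    ephemeral range, retrying if the port is already in use."""
    while True:
        port = random.randint(low, high)
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind(("127.0.0.1", port))
            return sock, port
        except OSError:
            sock.close()  # port taken; pick another at random

sock, port = bind_random_port()
# port now sits somewhere in the 49152-65535 range, revealing
# nothing about which application protocol will run over it
sock.close()
```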

I should state up front that I don’t want to make myself out to be a technological determinist; neither ‘offensive’ nor ‘defensive’ technologies are in a necessary causal relationship with one another. Many of the ‘offensive’ technologies could have been developed in light of increasingly nuanced viral attacks and spam barrages, to say nothing of the heightening complexity of intrusion attacks and pressures from the copyright lobbies. Similarly, encryption and anonymity technologies would have continued to develop regardless, given that in many nations it is impossible to trust local ISPs or governments.

Continue reading

Deep Packet Inspection and the Discourses of Censorship and Regulation

In the current CRTC hearings over Canadian ISPs’ use of Deep Packet Inspection (DPI) to manage bandwidth, I see two ‘win situations’ for the dominant carriers:

  1. They can continue to throttle ‘problem’ applications in the future;
  2. The CRTC leaves the wireless market alone for now.

I want to talk about the effects of throttling problem applications, and how people talking about DPI should focus on the negative consequences of regulation (something that is, admittedly, often done). In thinking about this, however, I want to first attend to the issues of censorship models to render transparent the difficulties in relying on censorship-based arguments to oppose uses of DPI. Following this, I’ll consider some of the effects of regulating access to content through protocol throttling. The aim is to suggest that individuals and groups who are opposed to the throttling of particular application-protocols should focus on the effects of regulation, given that it is a more productive space of analysis and argumentation, instead of focusing on DPI as an instrument for censorship.
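Since the argument turns on protocol throttling, it may help to sketch what throttling amounts to mechanically. Carriers’ actual DPI gear is proprietary, so the following is only a hypothetical illustration using a token bucket, one standard way of capping a flow’s throughput: packets of a ‘problem’ protocol are forwarded only while tokens remain, limiting that protocol to a configured rate.

```python
import time

class TokenBucket:
    """Illustrative rate limiter: tokens accrue at `rate` bytes/sec
    up to `burst` bytes; a packet is forwarded only if enough
    tokens are available to 'pay' for its length."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_len):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last packet.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True   # forward the packet
        return False      # drop or delay it (the throttle)

# Cap a hypothetical 'problem' protocol at roughly 1 Mbit/s.
bucket = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=10_000)
```

A classifier (DPI or otherwise) decides *which* packets hit the bucket; the bucket itself is indifferent to content, which is part of why the regulatory debate centres on the classification step rather than the rate limiting.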

Let’s first touch on the language of censorship itself. We typically understand this action in terms of a juridico-discursive model, or a model that relies on rules to permit or negate discourse. There are three common elements to this model-type:

Continue reading

Economics of Authenticity on Twitter

I’m on Twitter all the time; it’s central to how I learn about discussions about Deep Packet Inspection, a good way of finding privacy-folk from around the world, and lets me feel semi-socialized even though I’m somewhat reclusive. When I use the social networking service, I intersperse bits of ‘me’ (e.g. “This wine sucks!”) beside news articles I’ve found and believe would be useful to my colleagues, and add in some (attempts at) humor. In this sense, I try to make my Twitter feed feel ‘authentic’, meaning that it is reasonably reflective of how I want to present myself in digital spaces. Further, that presentation resonates (to varying extents) with how I behave in the flesh.

When you hear social-media enthusiasts talk about their media environment, authenticity (i.e. not pretending to be someone/something you’re really, absolutely, not) is the key thing to aim for. Ignoring the amusing Heideggerian implications of this use of authenticity (“How very They!”), I think we can take this to mean that there is a ‘currency’ in social media called ‘authenticity’. There are varying ways of gauging this currency.

Continue reading

Facebook Got Off Easy: Third-Parties and Data Collection

I’m on Facebook, and have been for years. I also dislike Facebook, and have for several years. I don’t dislike the social networking service because it’s bad at what it aims to do, but because it’s far too good at what it does. Let’s be honest: Facebook does not exist to ‘help me connect to my friends’. Maybe that was its aim when it was first dreamt up, but the current goal of Facebook is to make money from my data. Part of this involves Facebook mining my data, and another (and more significant) part entails third-party developers mining my data. I want to think out loud about this latter group and their practices.

A core issue (amongst several others) that the Office of the Privacy Commissioner of Canada (OPC) raised in their recent findings about Facebook focused on the data that third-party application developers gain access to when an individual installs a Facebook application. Before getting into this in any depth, I just want to recognize the full range of information that application developers can call on using the Facebook API:

Continue reading

« Older posts