In the current CRTC hearings over Canadian ISPs’ use of Deep Packet Inspection (DPI) to manage bandwidth, I see two ‘win situations’ for the dominant carriers:
- They can continue to throttle ‘problem’ applications in the future;
- The CRTC decides to leave the wireless market alone right now.
I want to talk about the effects of throttling problem applications, and about how people discussing DPI should focus on the negative consequences of regulation (something that, admittedly, is often done). In thinking about this, however, I first want to attend to models of censorship, to render transparent the difficulties of relying on censorship-based arguments to oppose uses of DPI. Following this, I’ll consider some of the effects of regulating access to content through protocol throttling. The aim is to suggest that individuals and groups opposed to the throttling of particular application-protocols should focus on the effects of regulation, which is a more productive space of analysis and argumentation, rather than on DPI as an instrument of censorship.
Let’s first touch on the language of censorship itself. We typically understand censorship in terms of a juridico-discursive model, one that relies on rules to permit or negate discourse. There are three common elements to this model-type:
[Note – I preface this with the following: I am not a lawyer, and what follows is a non-lawyer’s ruminations on how the Supreme Court’s thoughts on reasonable expectations of privacy intersect with what deep packet inspection (DPI) can potentially do. This is not meant to be a detailed examination of particular network appliances with particular characteristics, but is much, much more general in nature.]
Whereas Kyllo v. United States saw the US Supreme Court assert that thermal-imaging devices, when directed towards citizens’ homes, did constitute an invasion of citizens’ privacy, the corresponding Canadian case (R. v. Tessling) saw the Supreme Court assert that RCMP thermal-imaging devices did not violate Canadians’ Section 8 Charter rights (“Everyone has the right to be secure against unreasonable search or seizure”). The Court’s conclusions emphasized informational privacy interests at the expense of normative expectations – thermal information, on its own, was practically ‘meaningless’ – which has led Ian Kerr and Jena McGill to worry that informational understandings of privacy invoke:
Immanuel Kant’s essay “On the Common Saying: ‘This May be True in Theory, but it does not Apply in Practice'” argues that theory is central to understanding the world around us and that, moreover, attempts to say that ‘theory doesn’t apply to the world as such’ are generally misguided. Part of the reason that Kant can so firmly advocate that theory and reality are co-original emerges from his monological rationalism, but at the same time we see him argue that the clearest way to bring theory and practice into alignment is with more theory – rather than adopting ‘parsimonious’ explanations of the world, we would be better off developing rigorous and detailed accounts of the world.
Parsimony seems to be a popular term in the social sciences; it lets researchers develop concise theories that can be applied to particular situations, lets them isolate and speak about particular variables, and lends itself to broad(er) public accessibility of the theory in question. At the same time, theorists critique many such parsimonious accounts because they commonly fail to offer full explanations of social phenomena.
The complexity of privacy issues, in combination with a desire for parsimony, has been a confounding issue for privacy theorists. Nailing down what ‘privacy’ actually refers to has been, and continues to be, a nightmarish task insofar as almost every definition has some limiting factor. This problem is (to my mind) compounded when you enter online, or digital, environments, where developing a complete understanding of how data flows across systems, of the demands that technical languages place on data-processing systems, and of confidentiality and trust is incredibly challenging and yet essential for theorization. This is especially true when we think of a packet as being like a postcard (potentially one with its content encrypted) – in theory anyone could be capturing and analyzing packet streams and data that is held on foreign servers.
This is a full draft of the paper on Twitter and privacy that I’ve been developing over the past few weeks, entitled ‘Who Gives a ‘Tweet’ About Privacy?’ It uses academic privacy literature to examine Twitter and the notion of reasonable expectations of privacy in public, and is written to help nuance privacy discussions surrounding the discourse occurring on Twitter (and, implicitly, similar social networking and blogging sites). The paper focuses on concepts of privacy and, as such, avoids deep empirical analyses of how the term ‘privacy’ is used by particular members of the social networking environment. Further, the paper avoids delving into the web of legal cases that could be drawn on to inform this discussion. Instead, it is theoretically oriented around the following questions:
- Do Twitter’s users have reasonable expectations of privacy when tweeting, even though these tweets are the rough equivalent of making statements in public?
- If Twitter’s user base should hold expectations of privacy, what might condition these expectations?
The paper ultimately suggests that Daniel Solove’s taxonomy of privacy, most recently articulated in Understanding Privacy, offers the best framework to respond to these questions. Users of Twitter do have reasonable expectations of privacy, but such expectations are conditioned by juridical understandings of what is and is not reasonable. In light of this, I conclude by noting that Solove’s use of law to recognize norms is contestable. Thus, while privacy theorists may adopt his method (a focus on privacy problems to categorize types of privacy infractions), they might profitably condition how and why privacy norms are established – court rulings and dissenting opinions may not be the best foundation upon which to rest our privacy claims – by turning to non-legal understandings of norm development, degeneration, and mutation.
Paper can be downloaded here.