The Offensive Internet: Speech, Privacy, and Reputation is an essential addition to the academic, legal, and professional literature on the prospective harms raised by Web 2.0 and social networking sites in particular. Levmore and Nussbaum (eds.) have drawn together high-profile legal scholars, philosophers, and lawyers to trace the dimensions of how the Internet can cause harm, focusing on United States law to understand what enables harm and how it might be mitigated in the future. The editors have divided the book into four sections – ‘The Internet and Its Problems’, ‘Reputation’, ‘Speech’, and ‘Privacy’ – and included a total of thirteen contributions. On the whole, the collection is strong (even if I happen to disagree with many of the policy and legal changes that the authors call for).
In this review I want to cover the particularly notable elements of the book and then move to a meta-critique. Specifically, I critique how some authors perceive the Internet as an ‘extra’ that lacks significant difference from earlier modes of disseminating information, as well as the position that the Internet is somehow a less real or authentic environment for people to work, play, and communicate within. If you read no further, leave with this: this is an excellent, well-crafted edited volume and I highly recommend it.
I learned today that I was successful in winning a Social Sciences and Humanities Research Council (SSHRC) award. (Edit September 2009: I’ve been upgraded to a Joseph-Armand Bombardier Canada Graduate Scholarship.) Given how difficult I found it to locate successful research statements (except through personal contacts), I wanted to post my own statement for others to look at (and download, if they so choose). Since writing the statement below, some of my thoughts on DPI have become more nuanced, and I’ll be interested in reflecting on how ethics might relate to surveillance/privacy practices. Comments and ideas are, of course, welcome.
Interrogating Internet Service Provider Surveillance:
Deep Packet Inspection and the Confluence of International Privacy Regimes
Context and Research Question
Internet Service Providers (ISPs) are ideally situated to surveil data traffic because all traffic to and from the Internet must pass through their networks. Using sophisticated data traffic monitoring technologies, these companies investigate and capture the content of unencrypted digital communications (e.g. MSN messages and e-mail). Despite ISPs’ role as the digital era’s gatekeepers, very little work has been done in the social sciences to examine the relationship between the surveillance technologies that ISPs use to monitor data flows and the regional privacy regulations that adjudicate permissible degrees of ISP surveillance. With my seven years of employment in the field of Information Technology (the last several in network operations), and my strong background in conceptions of privacy and their empirical realization from my master’s degree in philosophy and current doctoral work in political science, I am unusually well suited to investigate this relationship. I will bring this background to bear when answering the following interlinked questions in my dissertation: What are the modes and conditions of ISP surveillance in the privacy regimes of Canada, the US, and the European Union (EU)? Do common policy structures across these privacy regimes engender common realizations of ISP surveillance techniques and practices, or do regional privacy regulations pertaining to deep packet inspection (DPI) technologies preclude any such harmonization?
[Note: this is an early draft of a section of a paper I’m working on titled ‘Who Gives a Tweet about Privacy’. Other sections will follow as I draft them.]
Unauthorized Capture and Transmission of Data
Almost every cellular phone sold today has a camera of some sort embedded in it. The potential for individuals to capture and transmit our image without permission has become a common fact of contemporary Western life, but this has not always been the case. When portable snapshot cameras were new and first used to capture images of indiscretions for gossip columns, Warren and Brandeis wrote an article asserting that the unauthorized capture and transmission of photos and gossip constituted a privacy violation. Such transmissions threatened to destroy “at once robustness of thought and delicacy of feeling. No enthusiasm can flourish, no generous impulse can survive under [gossip’s] blighting influence” (Warren and Brandeis 1984: 77). Individuals must be able to expect that certain matters will be kept private, even when acting in public spaces – they have a right to be let alone – or else society will reverse its progress towards civilization.
There has been a sustained argument, across the ‘net and in traditional circles, that privacy is being redefined before our very eyes. Oftentimes we can see how a word transforms by studying its etymology – this is helpful in understanding the basis of the words that we utter. What do we do, however, when we work to redefine not just a word’s definition (such as what the term ‘cool’ refers to) but its normative horizons?
In redefining the word ‘privacy’ to account for how people are empirically protecting their privacy, are we redefining the word, or the normative horizon that it captures? Moreover, can we genuinely assume that the term’s normative guide is changing simply because recent rapid changes in technology increase the difficulty of exercising our right to privacy in digitized environments? To argue that these normative boundaries are shifting largely because of how digital networks have been programmed presupposes that the networks cannot be designed in any other way – that digital content must flow as it does now, just as gravity acts on our physical bodies as it presently does. The difficulty in maintaining such an analogy is that it assumes there are natural laws immanent to the programming languages that structure how we can participate in digital environments.
The Canadian SIGINT Summaries includes downloadable copies, along with summary, publication, and original source information, of leaked CSE documents.
Parsons, Christopher; and Molnar, Adam. (2021). “Horizontal Accountability and Signals Intelligence: Lesson Drawing from Annual Electronic Surveillance Reports,” David Murakami Wood and David Lyon (Eds.), Big Data Surveillance and Security Intelligence: The Canadian Case.
Parsons, Christopher. (2015). “Stuck on the Agenda: Drawing lessons from the stagnation of ‘lawful access’ legislation in Canada,” Michael Geist (Ed.), Law, Privacy and Surveillance in Canada in the Post-Snowden Era (University of Ottawa Press).
Parsons, Christopher. (2015). “The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians,” Telecom Transparency Project.
Parsons, Christopher. (2015). “Beyond the ATIP: New methods for interrogating state surveillance,” in Jamie Brownlee and Kevin Walby (Eds.), Access to Information and Social Justice (Arbeiter Ring Publishing).
Bennett, Colin; Parsons, Christopher; Molnar, Adam. (2014). “Forgetting and the right to be forgotten” in Serge Gutwirth et al. (Eds.), Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges.
Bennett, Colin; Parsons, Christopher. (2013). “Privacy and Surveillance: The Multi-Disciplinary Literature on the Capture, Use, and Disclosure of Personal Information in Cyberspace” in W. Dutton (Ed.), Oxford Handbook of Internet Studies.
McPhail, Brenda; Parsons, Christopher; Ferenbok, Joseph; Smith, Karen; and Clement, Andrew. (2013). “Identifying Canadians at the Border: ePassports and the 9/11 legacy,” in Canadian Journal of Law and Society 27(3).
Parsons, Christopher; Savirimuthu, Joseph; Wipond, Rob; McArthur, Kevin. (2012). “ANPR: Code and Rhetorics of Compliance,” in European Journal of Law and Technology 3(3).