The Offensive Internet: Speech, Privacy, and Reputation is an essential addition to the academic, legal, and professional literature on the prospective harms raised by Web 2.0 and social networking sites in particular. Levmore and Nussbaum (eds.) have drawn together high-profile legal scholars, philosophers, and lawyers to trace the dimensions of how the Internet can cause harm, with a focus on United States law to understand what enables harm and how to mitigate it in the future. The editors have divided the book into four sections – ‘The Internet and Its Problems’, ‘Reputation’, ‘Speech’, and ‘Privacy’ – and included a total of thirteen contributions. On the whole, the collection is strong (even if I happen to disagree with many of the policy and legal changes that many authors call for).
In this review I want to cover the particularly notable elements of the book and then move to a meta-critique of it. Specifically, I critique how some authors perceive the Internet as an ‘extra’ that lacks any significant difference from earlier modes of disseminating information, as well as the position that the Internet is somehow a less real or authentic environment for people to work, play, and communicate within. If you read no further, leave with this: this is an excellent, well-crafted edited volume, and I highly recommend it.
I learned today that I was successful in winning a Social Sciences and Humanities Research Council (SSHRC) award. (Edit September 2009: I’ve been upgraded to a Joseph-Armand Bombardier Canada Graduate Scholarship.) Given how difficult I found it to locate successful research statements (save for through personal contacts), I wanted to post my own statement for others to look at (as well as download if they so choose). Since writing the statement below, some of my thoughts on DPI have become more nuanced, and I’ll be interested in reflecting on how ethics might relate to surveillance and privacy practices. Comments and ideas are, of course, welcomed.
Interrogating Internet Service Provider Surveillance:
Deep Packet Inspection and the Confluence of International Privacy Regimes
Context and Research Question
Internet Service Providers (ISPs) are ideally situated to monitor data traffic because all traffic to and from the Internet must pass through their networks. Using sophisticated data traffic monitoring technologies, these companies investigate and capture the content of unencrypted digital communications (e.g. MSN messages and e-mail). Despite their role as the digital era’s gatekeepers, very little work has been done in the social sciences to examine the relationship between the surveillance technologies that ISPs use to monitor data flows and the regional privacy regulations that adjudicate permissible degrees of ISP surveillance. With my seven years of employment in the field of Information Technology (the last several in network operations), and my strong background in conceptions of privacy and their empirical realization from my master’s degree in philosophy and current doctoral work in political science, I am unusually well suited to investigate this relationship. I will bring this background to bear when answering the following interlinked questions in my dissertation: What are the modes and conditions of ISP surveillance in the privacy regimes of Canada, the US, and the European Union (EU)? Do common policy structures across these privacy regimes engender common realizations of ISP surveillance techniques and practices, or do regional privacy regulations pertaining to deep packet inspection (DPI) technologies preclude any such harmonization?
[Note: this is an early draft of a section of a paper I’m working on titled ‘Who Gives a Tweet about Privacy’. Other sections will follow as I draft them.]
Unauthorized Capture and Transmission of Data
Almost every cellular phone now sold has a camera of some sort embedded into it. The potential for individuals to capture and transmit our image without permission has become a common fact of contemporary Western life, but this has not always been the case. When instantaneous photography was new and first used to capture images of indiscretions for gossip columns, Warren and Brandeis wrote an article asserting that the unauthorized capture and transmission of photos and gossip constituted a privacy violation. Such transmissions threatened to destroy “at once robustness of thought and delicacy of feeling. No enthusiasm can flourish, no generous impulse can survive under [gossip’s] blighting influence” (Warren and Brandeis 1984: 77). Individuals must be able to expect that certain matters will be kept private, even when acting in public spaces – they have a right to be let alone – or else society will reverse its progress towards civilization.
There has been a sustained argument, across the ‘net and in traditional circles, that privacy is being redefined before our very eyes. Oftentimes, we see how a word transforms by studying its etymology – this is helpful in understanding the basis of the words that we utter. What do we do, however, when we work to redefine not just a word’s definition (such as what the term ‘cool’ refers to) but its normative horizons?
In redefining the word ‘privacy’ to account for how people are empirically protecting their privacy, are we redefining the word, or the normative horizon that it captures? Moreover, can we genuinely assume that the term’s normative guide is changing simply because recent rapid changes in technology make it more difficult to exercise our right to privacy in digitized environments? To argue that these normative boundaries are shifting largely because of how digital networks have been programmed presupposes that the networks cannot be designed in any other way – that digital content will always flow as it does now, just as gravity will always act on our physical bodies as it presently does. The difficulty in maintaining such an analogy is that it assumes there are natural laws immanent to the programming languages that structure how we can participate in digital environments.
In recent months more and more attention has been directed towards Google’s data retention policies. In May 2007 Peter Fleischer, Google’s global privacy counsel, established three key reasons why his company had to maintain search records:
To improve their services. Specifically, he writes “Search companies like Google are constantly trying to improve the quality of their search services. Analyzing logs data is an important tool to help our engineers refine search quality and build helpful new services . . . The ability of a search company to continue to improve its services is essential, and represents a normal and expected use of such data.”
To maintain security and prevent fraud and abuse. “Data protection laws around the world require Internet companies to maintain adequate security measures to protect the personal data of their users. Immediate deletion of IP addresses from our logs would make our systems more vulnerable to security attacks, putting the personal data of our users at greater risk. Historical logs information can also be a useful tool to help us detect and prevent phishing, scripting attacks, and spam, including query click spam and ads click spam.”
To comply with legal obligations to retrieve data. “Search companies like Google are also subject to laws that sometimes conflict with data protection regulations, like data retention for law enforcement purposes.” (Source)
I’ve recently had the pleasure of reading some of Foucault’s Society Must Be Defended. Over the course of the book Foucault radically revises his earlier positions, and I hope to note and discuss these changes as I come across them. That said, I’ve recently finished the first lecture and wanted to reflect on the power of genealogies and the fragmented character of the ‘net, and to synthesize that with Wu and Goldsmith’s account of the Internet and Foucault’s own thoughts on power as repression. There’s a lot to do, but I think it might be very profitable to at least toy around with this for a bit.
There is a tendency to try to capture knowledge in unitary architectures. Foucault likens this to trying to develop a unifying concept to explain the behaviour of each droplet of water that explodes from around a sperm whale when it breaches. In the very process of establishing a complex formula to capture this information, the act itself is lost.