[Note: this is an early draft of the second section of a paper I’m working on titled ‘Who Gives a Tweet about Privacy’. It builds from an earlier posted section titled ‘Privacy, Dignity, Copyright and Twitter’. Other sections will follow as I draft them.]
Towards a Statutory Notion of Privacy
Whereas Warren and Brandeis explicitly built a tort claim to privacy (and can be read as implicitly laying the groundwork for a right to privacy), theorists such as Alan Westin attempt to justify a claim to privacy that would operate as the bedrock for a right to privacy. Spiros Simitis recognizes this claim, but argues that privacy should be read as both an individual and a social issue. The question that arises is whether these writers’ respective understandings of privacy capture the normative expectations of speaking in a public space such as Twitter: do their understandings of intrusion and data capture recognize the complexities of speaking in public spaces, and do they provide a reasonable expectation of privacy that reflects people’s interest in keeping private some, but not all, of the discussions they have in public?
[I recently posted a version of this on another website, and thought that it might be useful to re-post here for readers. For a background on Deep Packet Inspection technologies, I’d refer you to this.]
There is a real need for the various parties who advocate against Deep Packet Inspection (DPI) to work through what packet inspection appliances have done, historically, so that their arguments against DPI are as precise as possible. Packet inspection isn’t new, and it’s not likely to go away any time soon – perimeter defences for networks are essential for mitigating spam and viruses (and rely on Medium Packet Inspection).
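To make the distinction concrete, here is a minimal sketch in Python of the difference between shallow header-only inspection, medium inspection of a bounded payload prefix, and deep inspection of the full application payload. All field names and signatures are invented for illustration; nothing here reflects any real appliance’s design.

```python
# Toy model of packet inspection depth. All names and fields are invented
# for illustration; real appliances work on raw frames at line rate.

SIGNATURES = {
    b"BitTorrent protocol": "bittorrent-handshake",  # hypothetical signature table
}

def inspect(packet: dict, depth: str) -> dict:
    """Report what an inspector at a given depth can see and flag."""
    # Shallow inspection: headers only -- enough for routing and basic firewalling.
    report = {"src": packet["src"], "dst": packet["dst"], "port": packet["port"]}
    if depth == "shallow":
        return report
    # Medium inspection scans a bounded prefix of the payload (as perimeter
    # spam/virus defences do); deep inspection scans the entire payload.
    window = packet["payload"][:64] if depth == "medium" else packet["payload"]
    report["matches"] = [name for sig, name in SIGNATURES.items() if sig in window]
    return report
```

A packet whose identifying string sits past the first 64 bytes is flagged only at the deepest setting – which is the (simplified) sense in which DPI sees more than its predecessors did.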
I’m in no way an expert in the various discussions surrounding DPI (though I try to follow the network neutrality, privacy, and communications infrastructure debates), but I have put together a paper that attempts to clarify the lineage of DPI devices and (briefly) suggest that DPI can be understood as a surveillance tool that differs from prior packet inspection technologies. From a privacy perspective (which is where I sit in relation to the deployment of DPI), it’s important for privacy advocates to understand that approaching the issue from a principle-based approach is fraught with problems at legal, theoretical, and practical levels. The complexity of developing a principle-based approach is one of the reasons why many contemporary privacy scholars (myself included) have opted for a ‘problem-based’ approach to identifying privacy infringements. What, exactly, do most advocates mean when they say that their privacy is ‘violated’? I don’t think that a clear position comes out of the advocates’ positions (maybe it does, and I’m just not aware of it) – they appear to allude to a fundamental right to privacy, while pointing to specific instances as ‘violations’ of that right. The worry with principled approaches is that they struggle to fully capture what we mean when we say something is private, and are equally challenged to capture contextualized social norms of privacy (e.g. Street View in the US versus Japan, bodily privacy in differing cultures, etc.).
Peer-to-peer (P2P) technologies are not new and are unlikely to disappear anytime soon. While I’m tempted to talk about The Pirate Bay, or ‘the Pirate Google‘, in the context of P2P and privacy, other people have discussed these topics exceptionally well, and at length. No, I want to talk (in a limited sense) about the code of P2P, and about how the (accidental) uses of these technologies let us reflect on what privacy literature might offer to the debate concerning the regulation of P2P programs.
I’ll begin with code and P2P. In the US there have been sporadic discussions in Congress that P2P companies need to alter their UIs to make it more evident what individuals are, and are not, sharing on the ‘net when they run these programs. Matthew Lasar at Ars Technica has noted that Congress is interested in cutting down on what is termed ‘inadvertent sharing’ – effectively, members of Congress recognize that individuals have accidentally shared sensitive information using P2P applications, and want P2P vendors to design their programs in a way that will limit accidental sharing of personal/private information. Somewhat damningly, the United States Patent and Trademark Office declared in 2006 that P2P applications were “uniquely dangerous,” and capable of causing users “to share inadvertently not only infringing files, but also sensitive personal files like tax returns, financial records, and documents containing private or even classified data” (Source).
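The mechanics of ‘inadvertent sharing’ are easy to sketch. Here is a minimal Python illustration (the helper names and the list of ‘sensitive’ file types are my own inventions – no real client works exactly this way) of why a client that recursively shares a default folder exposes whatever happens to live there:

```python
from pathlib import Path

# File types a user would likely not want exposed; illustrative list only.
SENSITIVE_SUFFIXES = {".pdf", ".qfx", ".docx", ".tax"}

def files_shared(shared_root: str) -> list:
    """Everything a naive client sharing `shared_root` recursively exposes."""
    root = Path(shared_root)
    return sorted(str(p.relative_to(root)) for p in root.rglob("*") if p.is_file())

def sensitive(shared_paths: list) -> list:
    """The subset of shared files that look like personal documents."""
    return [p for p in shared_paths if Path(p).suffix in SENSITIVE_SUFFIXES]
```

If the user meant to share a music folder but pointed the client one directory too high, a tax return saved alongside that folder is published to the network with no further prompt – which is precisely the UI default that Congress wants changed.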
A few days ago I posted that Nova Scotia and New Brunswick might both be moving away from enhanced driver’s licences (EDLs) because of their costs and/or privacy issues. While the article discussing the issue was problematic (because of persistent factual errors), it appears as though the author was on target concerning New Brunswick’s concerns with the technology: EDLs will not be coming to my birthplace.
This means that both New Brunswick and Saskatchewan will not be going forward with EDLs, though Alberta, Quebec, Manitoba, Ontario, and B.C. are all going ahead with them. I’ll be curious to see whether the rest of the Atlantic provinces follow New Brunswick’s lead, and how this might shape the national discourse on EDLs.