Deep Packet Inspection: The Good, the Bad, and the Ugly

In this post, I want to lay out where I see the Deep Packet Inspection (DPI) discussions standing. This is partly to clarify things in my own head that I’ve been thinking through for the past couple of days, and partly to lay out for readers some of the ‘bigger picture’ elements of the DPI discussion (as I read them). If you’ve been fervently following developments surrounding this technology, then a lot of what is below is just rehashing what you know – hopefully the summary is useful – but if you’re relatively unfamiliar with what’s been going on, this might help to orient what’s been, and is being, said.

Participants and Themes

The uses of DPI appliances regularly come under fire from network neutrality advocates, privacy advocates, and people who are generally concerned about communication infrastructure. DPI lets network operators ‘penetrate’ data packets that are routed through their networks, and this practice is ‘new’ insofar as prior networking appliances were generally unable to inspect the actual payload, or content, of the data packets shuttled across the ‘net. To make this a bit clearer: when you send an email it is broken into a host of little packets that are reassembled at the destination. Earlier networking appliances could determine a packet’s destination, the kind of file being transmitted (e.g. a .mov or .jpeg), and so forth, but they couldn’t accurately identify what content was in the packet (e.g. the characters of an email message held within a packet). Using DPI, network operators can now (in theory) configure their appliances to capture the actions that users perform online and ‘see’ what they are doing in real time.
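The header-versus-payload distinction can be made concrete with a toy sketch. The ‘packet’ format below is entirely invented for illustration (a 2-byte destination port and a 2-byte length, then the payload) – real IP/TCP parsing is far more involved – but it shows the difference between reading routing information and reading content:

```python
import struct

def make_packet(dst_port: int, payload: bytes) -> bytes:
    # Toy format: 2-byte port + 2-byte payload length, then the payload.
    return struct.pack("!HH", dst_port, len(payload)) + payload

def shallow_inspect(packet: bytes) -> dict:
    # Header-only inspection: sees where the packet is going, not what it says.
    dst_port, length = struct.unpack("!HH", packet[:4])
    return {"dst_port": dst_port, "length": length}

def deep_inspect(packet: bytes, pattern: bytes) -> bool:
    # DPI-style inspection: reads into the payload itself.
    return pattern in packet[4:]

pkt = make_packet(25, b"Subject: quarterly report")
print(shallow_inspect(pkt))             # {'dst_port': 25, 'length': 25}
print(deep_inspect(pkt, b"quarterly"))  # True
```

Shallow inspection here can tell the traffic is headed to port 25 (mail), but only the deep pass can tell what the message actually says.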

Continue reading

Thoughts: P2P, PET+, and Privacy Literature

Peer-to-peer (P2P) technologies are not new and are unlikely to disappear anytime soon. While I’m tempted to talk about The Pirate Bay, or ‘the Pirate Google‘, in the context of P2P and privacy, other people have discussed these topics exceptionally well, and at length. No, I want to talk (in a limited sense) about the code of P2P and how these technologies are (accidentally) used, in order to reflect on what privacy literature might offer to the debate concerning the regulation of P2P programs.

I’ll begin with code and P2P. In the US there have been sporadic discussions in Congress suggesting that P2P companies need to alter their UIs to make it more evident what individuals are, and are not, sharing on the ‘net when they run these programs. Matthew Lasar at Ars Technica has noted that Congress is interested in cutting down on what is termed ‘inadvertent sharing’ – effectively, members of Congress recognize that individuals have accidentally shared sensitive information using P2P applications, and want P2P vendors to design their programs in a way that will limit accidental sharing of personal/private information. Somewhat damningly, the United States Patent and Trademark Office declared in 2006 that P2P applications were “uniquely dangerous,” and capable of causing users “to share inadvertently not only infringing files, but also sensitive personal files like tax returns, financial records, and documents containing private or even classified data” (Source).

Continue reading

Facial Blurring = Securing Individual Privacy?

The above image was taken by a Google Streetview car. As is evident, all of the faces in the picture have been blurred in accordance with Google’s anonymization policy. I think the image works nicely as a lightning rod, capturing some of the criticisms and questions that have arisen around Streetview:

  1. Does the Streetview image-taking process itself, generally, constitute a privacy violation of some sort?
  2. Are individuals’ privacy secured by just blurring faces?
  3. Is this woman’s privacy being violated/infringed upon in some way as a result of having her photo taken?

Google’s response is, no doubt, that individuals who feel an image is inappropriate can contact the company, which will take the image offline. The problem is that this puts the onus on individuals, though we might be willing to affirm that Google recognizes photographic privacy as a social value, insofar as any member of society who sees this as a privacy infringement/violation can also ask Google to remove the image. Still, even in the latter case this ‘outsources’ privacy to the community and is a reactive, rather than a proactive, way to limit privacy invasions (if, in fact, the image above constitutes an ‘invasion’). Regardless of whether we want to see privacy as an individual or a social value (or, better, as valuable both for individuals and society), we can perhaps more simply ponder whether blurring the face alone is enough to secure individuals’ privacy. Is anonymization the same as securing privacy?
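It’s worth noticing how little blurring actually removes. A minimal sketch – using a nested-list grayscale ‘image’ and an extreme box blur, nothing like Google’s actual pipeline – shows that everything outside the blurred box (clothing, location, companions) survives untouched:

```python
# Blur a rectangular region of a grayscale image (list of lists of ints)
# by replacing every pixel in the region with the region's mean value:
# an extreme box blur that destroys all detail inside the box.
def blur_region(img, top, left, h, w):
    region = [img[r][c] for r in range(top, top + h) for c in range(left, left + w)]
    mean = sum(region) // len(region)
    out = [row[:] for row in img]  # copy; leave the original untouched
    for r in range(top, top + h):
        for c in range(left, left + w):
            out[r][c] = mean
    return out

img = [[10, 20, 30, 40],
       [50, 60, 70, 80],
       [90, 100, 110, 120],
       [130, 140, 150, 160]]
blurred = blur_region(img, 0, 0, 2, 2)
# The 2x2 'face' region collapses to a single value; every other pixel,
# i.e. all the surrounding context, is exactly as it was.
```

The detail inside the box is gone, but the rest of the scene – which may identify the person just as readily – is untouched, which is precisely the worry about equating face-blurring with privacy protection.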

Continue reading

Draft: Code-Bodies and Algorithmic Voyeurism

I’ve recently been reading some of David Lyon’s work, and his idea of developing an ethic of voyeurism has intrigued me. I don’t think that I necessarily agree with his position in its entirety, but I think it’s an interesting one. This paper, entitled “Code-Bodies and Algorithmic Surveillance: Examining the impacts of encryption, rights of publicity, and code-specters,” is an effort to think through how voyeurism might be understood in the context of Deep Packet Inspection, using the theoretical lenses of Kant and Derrida. This paper is certainly more ‘theoretical’ than the working paper that I’ve previously put together on DPI, but it builds on that paper’s technical discussion of DPI to think about surveillance, voyeurism, and privacy.

As always, I welcome positive, negative, and ambivalent comments on the draft. Elements of it will be adopted for a paper that I’ll be presenting at a Critical Digital Studies workshop in a month or two – this is your chance to get me to reform positions to align with your own! *grin*

Facebook Fights Search Engines Over Copyright

The problem with walled gardens such as Facebook is that you can be searched whenever you pass through their blue gates. In the course of being searched, undesired data can be refused – data like links to ‘abusive’ sites that facilitate copyright infringement. As of today, Facebook has declared war on The Pirate Bay, maintaining that because links to the site often infringe on someone’s copyright, linking to it violates the terms of service that Facebook users agree to. Given that The Pirate Bay is just a particularly specialized search engine, it would seem that Facebook is now going to start applying (American?) ethical and moral judgements to the tools people use to search for data. Sharing data is great, but only so long as it’s the ‘right kind’ of data.

What constitutes ‘infringing’ use when talking about a search engine? Google, as an example, lets individuals quickly and easily find torrent files that can subsequently be used to download/upload infringing material. The specific case being made against the Pirate Bay is that:

“Facebook respects copyrights and our Terms of Service prohibits placement of ‘Share on Facebook’ links on sites that contain ‘any content that is infringing.’ Given the controversy surrounding The Pirate Bay and the pending lawsuit against them, we’ve reached out to The Pirate Bay and asked them to remove the ‘Share on Facebook’ links from their site. The Pirate Bay has not responded and so we have blocked their torrents from being shared on Facebook.” (Source)

Continue reading

Analysis: ipoque, DPI, and bandwidth management

In 2008, ipoque released a report titled “Bandwidth Management Solutions for Network Operators”. Using Deep Packet Inspection appliances, it is possible to establish a priority management system that privileges certain applications’ traffic over others: VoIP traffic can be dropped last, whereas P2P packets are given the lowest priority on the network. Two modes of management are proposed by ipoque:

  1. Advanced Priority Management: where multi-tiered priorities maintain Quality of Experience (rather than Service) by identifying some packet-types as more important than others (e.g. VoIP is more important than BitTorrent packets). Under this system, less important packets are only dropped as needed, rather than being dropped once a bandwidth cap is met.
  2. Tiered Service Model: This uses a volume-service system, where users can purchase a set amount of bandwidth for particular services. This is the ‘cell-phone’ model, where you sign up for packages that give you certain things and, if you exceed your package limitations, extra charges may apply. Under this model you might pay for a file-sharing option, as well as a VoIP and/or streaming HTTP bundle.
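The first mode is easiest to see in miniature. The sketch below is my own toy rendering of priority-based admission, not ipoque’s actual algorithm or priority classes: lower numbers mean more important traffic (VoIP at 0, BitTorrent at 2), and low-priority packets are only dropped when the offered load exceeds link capacity, rather than everything past a flat cap being cut off:

```python
def schedule(packets, capacity):
    """packets: list of (priority, size) tuples, lower priority number = more
    important. Admit traffic highest-priority-first until capacity is used."""
    admitted, used = [], 0
    # sorted() is stable, so arrival order is preserved within a priority class.
    for prio, size in sorted(packets, key=lambda p: p[0]):
        if used + size <= capacity:
            admitted.append((prio, size))
            used += size
    return admitted

# VoIP (prio 0) and web (prio 1) fit; BitTorrent (prio 2) is dropped
# only because the link is full, not because of any flat cap.
offered = [(2, 40), (0, 30), (1, 20), (2, 50)]
print(schedule(offered, 60))  # [(0, 30), (1, 20)]
```

With a bigger link, the same BitTorrent packets would sail through untouched – which is the point of ‘only dropped as needed’.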

The danger with filtering by application (from ipoque’s position) is that, while local laws can be enforced, it opens the ISP to dissatisfaction if legitimate websites are blocked. Thus, while an ISP might block Mininova, it can’t block Fedora repositories as well – blocking the first might conform to local laws, whereas blocking the second would infringe on consumers’ freedoms. In light of this challenge, ipoque suggests that ISPs could adopt Saudi Arabia-like white-lists, where consumers can send a message to their ISP when they find sites being illegitimately blocked. Once the ISP checks out the site, it can either remove the site from the black-list or inform the customer of why the site must remain listed.
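The report-and-review flow ipoque describes can be sketched as a small state machine – the class and site names below are illustrative inventions, not anything from the report: a blocked site stays blocked until a customer complaint is reviewed, at which point the ISP either whitelists it or keeps it listed:

```python
class Filter:
    def __init__(self, blacklist):
        self.blacklist = set(blacklist)
        self.whitelist = set()  # reviewed sites that override the black-list
        self.pending = []       # customer reports awaiting ISP review

    def is_blocked(self, site):
        return site in self.blacklist and site not in self.whitelist

    def report(self, site):
        # A customer flags a site they believe is illegitimately blocked.
        if self.is_blocked(site):
            self.pending.append(site)

    def review(self, site, legitimate):
        # The ISP checks the site out: whitelist it, or keep it listed
        # (in the latter case, the customer would be told why).
        if site in self.pending:
            self.pending.remove(site)
            if legitimate:
                self.whitelist.add(site)

f = Filter(["mininova.example", "fedora-mirror.example"])
f.report("fedora-mirror.example")
f.review("fedora-mirror.example", legitimate=True)
# fedora-mirror.example is now reachable; mininova.example stays blocked.
```

Note that this is still reactive – the mirror is unreachable until someone complains and the ISP acts, which is exactly the consumer-side cost of the white-list approach.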

Continue reading