German Deep Packet Inspection (DPI) manufacturer ipoque has produced a white paper titled “Deep Packet Inspection: Technology, Applications & Network Neutrality.” In it, the company distinguishes between DPI as a technology and possible applications of the technology in a social environment. Following this discussion, it provides a differentiated ‘tiering’ of various bandwidth management impacts on network neutrality. In this post I offer a summary of, and commentary on, the white paper, and ultimately wonder whether there is an effective theoretical model, grounded in empirical study, to frame or characterize network neutrality advocates.
The first thing that ipoque does is try to deflate the oft-heard ‘DPI analysis = opening a sealed envelope’ analogy, arguing that it is better to see packets as postcards, where DPI analysis involves looking for particular keywords or characters. On this analysis, because the technology cannot know the meaning of what is being searched for, DPI appliances cannot be said to violate one’s privacy, given the technology’s lack of contextual awareness. I’ve made a similar kind of argument, that contextual meaning escapes DPI appliances (though along different lines), in a paper that I presented earlier this year titled “Moving Across the Internet: Code-Bodies, Code-Corpses, and Network Architecture,” though I think that it’s important to recognize a difference between a machine understanding something itself versus flagging particular words and symbols for a human operator to review. Ubiquitous, “non-aware” machine surveillance can have very real effects where a human is alerted to communications – it’s something of a misnomer to say that privacy isn’t infringed simply because the machine doesn’t know what it’s doing. We ban and regulate all kinds of technologies because of what they can be used for rather than because the technology itself is inherently bad (e.g. wiretaps).
DPI as a technology has a variety of roles in network infrastructures; ipoque is concerned with bandwidth management and, as such, sees the need for DPI because it facilitates functioning networks. By looking at 1-3 packets (when traffic is unencrypted) or 3-20 packets (when traffic is encrypted), network operators can identify the applications that are generating or receiving data traffic, apply rule-sets as desired/needed, and develop a better understanding of usage patterns to facilitate intelligent building-out of network capacity according to subscriber habits and demands. Though the focus of the white paper is on bandwidth management, ipoque does recognize that DPI can be instrumental in the regulation of netspace. They offer a few examples of possible DPI-related manifestations of such regulation:
- block encryption and tunnelling systems that prevent lawful intercept
- block unregulated telephony
- block illegal content
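The identification step ipoque describes – recognizing an application from the first few packets of a flow – is essentially signature matching against the early payload bytes. Here is a minimal sketch of the idea; the signature table and flow model are my own toy illustrations, not ipoque’s actual (proprietary) detection rules:

```python
import re

# Toy signature table: application -> pattern matched against early payload bytes.
# Real DPI products ship thousands of signatures plus behavioural heuristics.
SIGNATURES = {
    "http": re.compile(rb"^(GET|POST|HEAD) .* HTTP/1\.[01]"),
    "bittorrent": re.compile(rb"^\x13BitTorrent protocol"),
    "smtp": re.compile(rb"^(HELO|EHLO) "),
}

def classify_flow(packets, max_packets=3):
    """Try to identify the application from the first few payloads.

    Unencrypted traffic often reveals itself within 1-3 packets; encrypted
    or obfuscated traffic needs more packets (or statistical heuristics)
    and may simply remain 'unknown'.
    """
    for payload in packets[:max_packets]:
        for app, sig in SIGNATURES.items():
            if sig.search(payload):
                return app
    return "unknown"

print(classify_flow([b"GET /index.html HTTP/1.1\r\n"]))  # -> http
print(classify_flow([b"\x16\x03\x01\x00"]))              # -> unknown (a TLS handshake)
```

Once a flow is labelled, the operator can attach whatever rule-set is desired – throttle it, prioritize it, or (per the examples above) block it outright.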
The first and second items, in particular, are worrisome. This said, Diffie and Landau recognize that even encrypted traffic can be useful in intelligence operations – often more useful than content analysis alone. The American spat over Google Voice and the iPhone may begin to set precedent in North America over whether carriers are ever permitted to block particular telephony applications. Of course, once DPI appliances are set into network topologies it is possible/likely that intelligence services will want access to the equipment for their own purposes. Rather than any of the aforementioned debates being issues for DPI as a technology, ipoque argues that they are issues of the governance of technology – a fear/concern about the latter should not prevent the wholesale deployment of the technology.
ipoque perceives regulation as necessary for a healthy broadband environment; rather than adopting a situation where users determine the priority of applications, or a volume-based system that would stunt innovation, governments should ensure a competitive marketplace among service providers to give subscribers the greatest breadth of choices. Rather than offering a binary ‘DPI is good/bad for network neutrality and the market’, a seven-tier model is offered to understand how DPI can integrate with varying modes of traffic analysis and interference, and the impacts of each model on the network neutrality discussion.
- Best Effort Service (no DPI required)
- Per-User Bandwidth Fairness (no DPI required – ensures that all users get about equal share of available bandwidth and has absolutely no detected cons for net neutrality)
- User-Configurable Disabling of Selected Applications (requires DPI, but could enable subscribers to disable applications that could generate copyright infringement suits, etc)
- Application-aware Congestion Management (offers superior congestion protection compared to prior models, as well as enhanced Quality of Service, but results in some applications getting less bandwidth while requiring DPI)
- Tiered Services and Pricing (sees a set of services, such as: (a) very cheap or ad-financed Web-only service; (b) cheap service that excludes certain high-bandwidth applications; (c) the features of (b) plus letting subscribers enable excluded services for a one-time fee via a portal; (d) more expensive all-inclusive service; (e) expensive service with business-level QoS guarantees. Would require DPI)
- Quality of Service Guarantees for Provider Services (has high potentials for misuse, and thus requires regulation. May require DPI)
- Revenue Protection and Generation (has severe privacy implications as ISPs monetize data traffic through DPI-facilitated analysis, and requires special purpose DPI equipment)
The key element of ipoque’s white paper is really that the applications of the technology, as opposed to the technology itself, need to be regulated/demonized. This is a fair position for any technology vendor to assume; DPI has the potential to be useful in general network operations, to enhance subscriber security, and so forth. The blatant call for regulation of particular applications of DPI is a good precedent to set, and also lets ipoque distance themselves from some of the more opaque DPI vendors and ISPs. At issue, however, is that the company tries to pass a lot off to ‘society’ to address – I would suggest (following from a little research done on civil advocacy groups) that advocates in civil society tend to be woefully underfunded, understaffed, and (somewhat generally) under-resourced. This isn’t to say that civil actors can’t be, or aren’t, effective in mobilizing – the CRTC hearing in Canada is a good example that members of civil society and business alike can, and do, come together over certain applications of DPI technologies.
Rather than passing the buck off to civil society, it would be delightful to see more companies like ipoque get very publicly involved in the international regulatory processes about DPI. Admittedly, the majority of ipoque’s business is in the EU and Africa, with limited penetration in North or South America, but as a leading vendor their voices in regulatory hearings would be welcome. Somewhat interestingly, in Canada many advocates opposed to DPI are motivated by opaque ISPs’ applications of the technology and are not necessarily opposed to the technology itself. This squares with ipoque’s desire that there be a delimitation between the technology and its potentials/applications. I would suggest that it is a consequence of ISP-focused mistrust that DPI has become the battleground topic that it is today.
While ipoque admits that intelligence agencies will want to get their hands on DPI equipment once it is installed in network topologies, the company insists that regulation can offer the solution to surveillance that ‘society’ finds offensive. The issue with all regulatory injunctions, especially as they pertain to equipment that can perform surveillance, is that function creep and the dismantling of regulatory policies are common occurrences. As a result, some privacy advocates, such as the advocate-activist, adopt the most strident positions possible that would maximally secure privacy rights and expectations now and in the future. Such advocate-activists know that other parties will make more ‘pragmatic’ or ‘pro-surveillance’ arguments. These latter two groups tend to be better resourced than the activists; the advocate-activist is there to strike the bell so that other parties and members of society are aware of the potentially severe effects of new surveillance technologies. Advocate-activists play an important role in the debate, and their effect is likely best understood when the pragmatic and pro-surveillance lobbies actively begin to dismiss their arguments; such dismissals demonstrate that the activists’ arguments are at least being heard.
As far as the proposed network neutrality typology goes, ipoque is offering a ‘pragmatic’ or cost/benefit approach to network neutrality. I expect that few activists in the domain of network neutrality would be pleased with anything other than a Best Effort or Per-User Bandwidth Fairness model (they would presumably take issue where the latter model led to speeds well below what their ISP advertised, but would see that not as an infringement of network neutrality principles but as an issue of ISP provisioning policies). What doesn’t appear to be identified in ipoque’s typology is protocol agnostic throttling, where whenever there is a high degree of congestion on a link the heaviest users are throttled, regardless of the application(s) that are generating and receiving data. In essence, this targets the individual and not the application, and only temporarily, until the congestion is cleared up. Such an agnostic model would seem to fit somewhere between the Per-User Bandwidth Fairness and Application-aware Congestion Management models. Barring this, ipoque appears to have developed a good typology of the dominant models of bandwidth regulation that ISPs and advocates alike have put forth.
I would love to see some work put together that engages with network neutrality advocates in a manner that is similar to Bennett’s work on privacy advocates; while he offers what could become a generalized typology for advocates in digital spaces, it is derived from his empirical work in the privacy community in particular. Would a similar typology develop were one to engage in an empirical study of network neutrality proponents? I have assumed this would be the case without relying on an already existing argumentative structure that would support this assumption. An investigation into network neutrality advocates might reveal that the net neutrality crowd has a radically different composition than privacy advocates, and thus demands a very differently nuanced approach to frame and understand the discourse generated by DPI manufacturers, governments, members of civil society, business, and other agents in the debate. From this, we could situate various actors in the communicative network and comparatively evaluate the merits and challenges of various positions, potentially revealing a very different typology of network neutrality advocates and of the various understandings of what network neutrality itself represents. It’s possible (however unlikely I find it) that the models ipoque presents are just commonly presented, rather than authentic, representations of the net neutrality debate – only further research can confirm whether or not ipoque has authentically (on a best-effort basis) characterized the net neutrality models circulating policy rooms and public debate.
(My thanks to Kristin Wolf at ipoque for alerting me to this white paper’s publication as well as for her hand in shifting the white paper from behind a registration wall into the publicly available Resources/White Papers section of ipoque’s website.)
10 thoughts on “Analysis: ipoque, DPI, and Network Neutrality”
What doesn’t appear to be identified in ipoque’s typology is protocol agnostic throttling, where whenever there is a high degree of congestion on a link the heaviest users are throttled, regardless of the application(s) that are generating and receiving data.
I think I can answer this for you. When shaping traffic, DPI appliances don’t analyze historical flows, so the back history of a user’s usage isn’t readily available. Each application of a policy is done nanosecond by nanosecond, so a full understanding of the overall bandwidth usage of a user is not available. That would require more caching and analysis than most (all?) units currently provide.
Also, traffic managers are usually concerned about the responsiveness of an application. As iPoque pointed out, various applications have different needs (latency/number of connections/overall data transfer rates, or some combination of each) that affect an application’s serviceability. Which is why priority by application (VOIP, followed by streaming TV, followed by etc.) is really the best approach. Application agnostic shaping of traffic does no good if all you can hear over your Skype connection is … occasi… word…. fro… the other part…
Allowing latency sensitive applications a higher priority does not usually affect high bandwidth transfers like FTP/HTTP or P2P. They will still get most of the circuit most of the time, they just have to pull over to the slow lane while the high priority VOIP traffic zips through.
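The priority-by-application scheme described above can be sketched as a strict-priority scheduler: transmit from the highest-priority non-empty queue first, so VoIP “zips through” while bulk transfers briefly pull over. The class names and queue model here are my own simplification, not any vendor’s actual design:

```python
from collections import deque

class PriorityShaper:
    # Lower number = higher priority. Latency-sensitive classes outrank bulk.
    PRIORITY = {"voip": 0, "streaming": 1, "web": 2, "bulk": 3}

    def __init__(self):
        # One FIFO queue per priority level.
        self.queues = {p: deque() for p in self.PRIORITY.values()}

    def enqueue(self, packet, traffic_class):
        self.queues[self.PRIORITY[traffic_class]].append(packet)

    def dequeue(self):
        """Transmit from the highest-priority non-empty queue."""
        for prio in sorted(self.queues):
            if self.queues[prio]:
                return self.queues[prio].popleft()
        return None  # link idle

shaper = PriorityShaper()
shaper.enqueue("ftp-data", "bulk")
shaper.enqueue("rtp-frame", "voip")
print(shaper.dequeue())  # -> rtp-frame (VoIP goes first despite arriving later)
print(shaper.dequeue())  # -> ftp-data  (bulk still gets the link right after)
```

Note how the bulk packet is only deferred, not dropped: as the commenter says, high-bandwidth transfers still get most of the circuit most of the time.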
I liked that white paper BTW, it echoes many of things I’ve written at my blog. The sign of intelligence is how much someone agrees with you and all that! 😉
Sorry for the late response – I’ve largely secluded myself from the world while I prepare for some major exams that are coming up.
To clarify what I mean: by ‘heaviest users’ I wasn’t referring to a historical period, but ‘heavy at the time of congestion’. Thus, if you historically only tend to use a very small portion of your bandwidth, but at a moment of congestion happen to be using a significant share of the available bandwidth through that node, then you would be throttled. This has the advantage of (likely) impeding the historically heavy bandwidth users on a more regular basis than those who very rarely engage in activity that would generate congestion.
What is certainly an issue is that there is typically a need to distinguish between service types (e.g. Skype versus SMTP traffic). This is certainly something that a purely agnostic position would have issues with, though potentially what could instead be done is this: after classifying particular subtypes of traffic (e.g. real-time, bulk, etc.), throttling is applied to a particular subtype of traffic in an agnostic fashion. For example, if P2P is in the ‘bulk’ section (and thus identified as delayable) then not just P2P but ALL protocols in the ‘bulk’ section are throttled. In this sense, it would be protocol agnostic within traffic subtypes.
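A minimal sketch of this ‘protocol agnostic within traffic subtypes’ idea: protocols are first binned into subtypes, and during congestion every protocol in a delayable subtype is cut by the same factor, regardless of which protocol it happens to be. The subtype assignments and the 25% rate factor are illustrative assumptions of mine, not a proposal from the white paper:

```python
# Illustrative mapping of protocols into traffic subtypes.
SUBTYPE = {
    "voip": "real-time", "video-call": "real-time",
    "http": "interactive", "imap": "interactive",
    "bittorrent": "bulk", "ftp": "bulk", "usenet": "bulk",
}

def throttled_rate(protocol, requested_bps, congested, bulk_factor=0.25):
    """During congestion, cut every 'bulk' protocol by the same fraction.

    Agnostic within the subtype: BitTorrent and FTP are treated identically
    because they share the 'bulk' bin, while real-time traffic is untouched.
    """
    if congested and SUBTYPE.get(protocol) == "bulk":
        return int(requested_bps * bulk_factor)
    return requested_bps

# Not just P2P: FTP is throttled identically because it shares the subtype.
print(throttled_rate("bittorrent", 8_000_000, congested=True))  # -> 2000000
print(throttled_rate("ftp", 8_000_000, congested=True))         # -> 2000000
print(throttled_rate("voip", 100_000, congested=True))          # -> 100000
```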
I generally like iPoque’s whitepapers; the people behind them do good work and engage with issues from a variety of positions and take them seriously. It’s quite a nice change from an industry that is typically shrouded in smoke and mirrors (grin)
Oh no problem on the delay. Obviously I keep checking back….
I’m not sure about iPoque, but my experience with Packeteer makes classifying individual user streams difficult. This gets into the policy vs. partition type of classification. You use a partition classification to group a bunch of protocols together (real-time, bulk, etc.). This is the simplest and fastest classification or management scheme. It requires very little overhead on the part of the DPI appliance, minimizing processing delay while keeping overall memory/CPU usage low on the device.
Tracking connections per user (which you would have to do on an agnostic per-user basis) requires a per-user policy. Every connection is stored in memory for classification. This requires a crap-load of CPU and memory on the DPI appliance. The more connections and users there are, the harder the appliance works to keep up.
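The cost difference between the two schemes can be sketched roughly as follows: a partition keeps one shared counter per traffic class (state is fixed no matter how many subscribers there are), while a per-user policy must keep a counter per user – and real appliances track per-connection state on top of that, which multiplies memory use again. The data structures here are an illustrative simplification of mine, not Packeteer’s actual implementation:

```python
from collections import defaultdict

class PartitionShaper:
    """One shared byte counter per class: O(number of classes) state."""
    def __init__(self, caps_bytes):
        self.caps = caps_bytes            # e.g. {"bulk": 10_000_000}
        self.used = defaultdict(int)      # bytes this interval, per class

    def admit(self, traffic_class, nbytes):
        # Over the partition cap -> delay/drop; all protocols in the class
        # fight it out with each other inside the shared allocation.
        if self.used[traffic_class] + nbytes > self.caps.get(traffic_class, float("inf")):
            return False
        self.used[traffic_class] += nbytes
        return True

class PerUserShaper:
    """One byte counter per user: state grows with the subscriber base."""
    def __init__(self, cap_bytes):
        self.cap = cap_bytes
        self.used = defaultdict(int)      # bytes this interval, per user

    def admit(self, user, nbytes):
        if self.used[user] + nbytes > self.cap:
            return False
        self.used[user] += nbytes
        return True
```

With a million subscribers, `PartitionShaper` still holds a handful of counters while `PerUserShaper` holds a million – which is the commenter’s point about why the partition approach is quick and efficient.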
This is why I’ve been defending Bell on the way it handled the BitTorrent issues. They found (according to the CRTC submission) that BitTorrent traffic in particular was congesting or oversubscribing their links. They did exactly what I would have done: set up a partition (probably a low priority with a rate cap).
It’s quick/efficient and gets the job done without overloading equipment. During oversubscription periods, the BitTorrent users would have been fighting it out with each other, letting the main channel serve more desirable traffic. As more torrent users come online they only degrade each other’s service, not all traffic, so people can still e-mail, pay bills, and do “normal” Internet activity.
Comcast, in the US, is using a protocol agnostic system though (http://www.dslreports.com/shownews/New-Comcast-Throttling-System-Should-Be-100-Online-100015) – admittedly what I’m suggesting seems a tad more ambitious, but it’s something along the lines of what Comcast is doing that I’m most interested in. Ideally, rather than just targeting high usage (no matter the congestion) it would be better targeted toward congested nodes. This would preserve ‘burst’ data transfers (e.g. web pages) while delaying traffic that tried to utilize ‘burst’ speeds long-term.
Many thanks for the distinction between policy and partition; I remember reading about it some time ago in the Heavy Ready DPI report prepared for the CRTC hearing, but had forgotten about it.
Question: Have you ever looked at a management interface for a DPI/shaping device?
I could webinar with you on our setup if you want an overview.
I’ll be honest – all of my information is from talking with people in the know/reading. I’ve never actually looked at a management interface save through screenshots. It’d be great to webinar to get an overview of things!
Cool. Then we’ll have to try to arrange a few hours where I can show you my setup, how it works/why it works and you can fire away with questions that I will try to answer as well as I can. You still have my work e-mail right? Let me know your general availability over the next few weeks and I’ll see where I can fit you in.
Kindly send me a comparison between different DPI devices.
I don’t presently have a piece-by-piece analysis of DPI equipment written – you can look at various corporate whitepapers to see how the devices are distinguished (though that will leave you with some corporate spin, that’s for sure!). The best place for a comparison is Internet Evolution, which has two evaluations of DPI equipment in a side-by-side comparison.