All sorts of nasty things are said about ISPs that use Deep Packet Inspection (DPI): that they aren’t investing enough in their networks, that they just want to punish early adopters of new technologies, that they’re looking to expand their regulatory capacities, or that they want to track what their customers do online. ISPs, in turn, tend to insist that P2P applications cause undue network congestion, and that DPI is the only measure presently available to them to alleviate it.
The constant focus on P2P over the past few years has produced various ‘solutions’, including the development of P4P and the shift to UDP. Unfortunately, the cat and mouse game between groups representing record labels, ISPs (to a limited extent), and end-users has ensured that most of the time and money goes into ‘offensive’ and ‘defensive’ technologies and tactics online, rather than more extensively into bandwidth-limiting technologies. Offensive technologies are those that enable mass analysis of data- and protocol-types to try to stop or delay particular modes of data sharing; DPI belongs to this set, but a multitude of other network technologies fit the category just as easily. ‘Defensive’ technologies include port randomizers, stronger encryption and anonymity techniques, and other measures primarily designed to evade particular analyses of network activity.
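As a concrete (if simplified) illustration of what I mean by a ‘defensive’ technique, here is a minimal Python sketch of port randomization: a P2P client that listens on a different ephemeral port each session, so its traffic cannot be spotted by a well-known port number alone. The port range and retry count are my own assumptions, and note that this does nothing against payload- or flow-based inspection, which is precisely why DPI emerged.

```python
import random
import socket

# Illustrative only: bind a P2P listener to a random high port each session,
# so traffic cannot be identified by a well-known port number alone.
# (Port randomization does nothing against payload/flow analysis.)

def open_random_listener(low=49152, high=65535, attempts=20):
    """Try random ephemeral ports until one binds successfully."""
    for _ in range(attempts):
        port = random.randint(low, high)
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind(("0.0.0.0", port))
            sock.listen(5)
            return sock, port
        except OSError:
            sock.close()
    raise RuntimeError("could not find a free port")

if __name__ == "__main__":
    listener, port = open_random_listener()
    print(f"listening on randomized port {port}")
    listener.close()
```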
I should state up front that I don’t want to make myself out to be a technological determinist; neither ‘offensive’ nor ‘defensive’ technologies are in a necessary causal relationship with one another. Many of the ‘offensive’ technologies could have been developed in light of increasingly nuanced viral attacks and spam barrages, to say nothing of the growing complexity of intrusion attacks and pressure from the copyright lobbies. Similarly, encryption and anonymity technologies would have continued to develop regardless, given that in many nations it is impossible to trust local ISPs or governments.
That said, a focus on more extensive surveillance systems will lead to the development of new means of overcoming those same systems. If a core reason for deploying DPI is to manage overwhelming amounts of data traffic, then throttling particular kinds of traffic may be exactly the wrong path for ISPs that want to resolve congestion. When particular protocol-types and applications are automatically assumed to deserve throttling, users and developers are driven to override or evade the throttle: DPI becomes the enemy to be fought, and congestion a (relative) non-issue.
Continuing to assume that data congestion is the core motivation for investing in DPI appliances, tightening up data compression algorithms could help to mitigate congestion and reduce the need for DPI ‘solutions’ (see the sketch after the list below). For developers to have an impetus to route traffic more efficiently, at least two things must happen:
- End-users must be aware of the exact amount of bandwidth they use;
- End-users must know which applications are responsible for that bandwidth use.
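To make the compression point concrete, here is a minimal Python sketch, using the standard zlib module, of how an application could report the ‘cost’ of its traffic with and without compression. The toy payload and compression level are assumptions for illustration; real savings depend on how compressible the traffic actually is (already-compressed media gains almost nothing).

```python
import zlib

# Illustrative only: an application that compresses its payloads before
# sending 'costs' the user less of their quota. The payload here is toy,
# highly redundant data chosen to make the difference visible.

def bandwidth_cost(payload: bytes, compress: bool = True) -> int:
    """Return the number of bytes that would actually cross the wire."""
    return len(zlib.compress(payload, 9)) if compress else len(payload)

if __name__ == "__main__":
    payload = b"metadata,peer_list,chunk_map\n" * 2000
    raw = bandwidth_cost(payload, compress=False)
    compressed = bandwidth_cost(payload, compress=True)
    print(f"raw: {raw} bytes, compressed: {compressed} bytes "
          f"({100 * (1 - compressed / raw):.1f}% saved)")
```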
If throttling mechanisms were deployed only when individuals hit particular bandwidth quota levels, as opposed to throttling all content that uses particular content distribution systems, then developers would have a real, demand-driven reason to compress data more aggressively. Programs that ‘cost’ less bandwidth would be preferred over those that ‘cost’ more, and a genuine competition could take place in the marketplace.
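Here is a rough Python sketch of what quota-based throttling looks like when it ignores protocols entirely: the only input is how much a given subscriber has consumed this billing period. The quota, rate cap, and subscriber IDs are invented for the example; nothing in it depends on inspecting what kind of traffic is flowing.

```python
from collections import defaultdict

# Illustrative only: a per-subscriber quota check that is application- and
# protocol-agnostic. The decision depends on how much a user has consumed,
# not on what kind of traffic DPI thinks it is seeing.

MONTHLY_QUOTA_BYTES = 250 * 10**9   # e.g. 250 GB before throttling kicks in (assumption)
THROTTLED_RATE_BPS = 512 * 1000     # post-quota rate cap (assumption)

usage = defaultdict(int)            # bytes consumed this billing period

def account(subscriber_id: str, nbytes: int) -> None:
    usage[subscriber_id] += nbytes

def allowed_rate(subscriber_id: str, full_rate_bps: int) -> int:
    """Full line rate under quota; a flat cap above it -- no protocol inspection."""
    if usage[subscriber_id] <= MONTHLY_QUOTA_BYTES:
        return full_rate_bps
    return THROTTLED_RATE_BPS

if __name__ == "__main__":
    account("subscriber-42", 260 * 10**9)             # over quota
    print(allowed_rate("subscriber-42", 20_000_000))  # -> 512000
    print(allowed_rate("subscriber-7", 20_000_000))   # -> 20000000
```

The point is not the particular numbers but that the decision is made per subscriber rather than per protocol, which leaves nothing for a ‘defensive’ technology to evade.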
The issue, of course, is that major ISPs have absolutely terrible quota management systems. This isn’t to say that they can’t track bandwidth consumed, but that such tracking efforts are in-house arrangements and should therefore be viewed with suspicion. As I’ve previously written, trusted third parties should be responsible for actually creating AND ‘inspecting’ bandwidth monitoring systems. Once end-users understand (and trust) the information their ISP delivers to them about bandwidth usage, pressure from savvy consumers can mount against application developers. Focusing on limiting application types, instead of letting consumers exert their considerable market pressure on developers, only encourages developers to create newer techniques to evade each offensive technology upgrade. Given that it takes just one person with a good idea to overcome most throttling and analysis techniques, why not do everything possible to encourage that person to have a good idea about application-specific data compression instead of working on ‘defensive’ technologies?
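By way of illustration only, here is one minimal way a usage report could be made checkable by an outside party, assuming the ISP and a third-party auditor share a signing key (an assumption of the sketch, not a description of any existing system): the ISP signs each usage record, and the auditor can confirm the figures weren’t altered after the fact.

```python
import hashlib
import hmac
import json

# Illustrative only: the shared key, record fields, and figures are made up.
AUDITOR_SHARED_KEY = b"not-a-real-key"   # placeholder secret shared with the auditor

def sign_usage_record(record: dict, key: bytes = AUDITOR_SHARED_KEY) -> str:
    """ISP side: produce an HMAC signature over a canonicalized usage record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_usage_record(record: dict, signature: str,
                        key: bytes = AUDITOR_SHARED_KEY) -> bool:
    """Auditor side: confirm the record matches the signature it arrived with."""
    return hmac.compare_digest(sign_usage_record(record, key), signature)

if __name__ == "__main__":
    record = {"subscriber": "subscriber-42", "period": "2009-06",
              "bytes_down": 183_000_000_000, "bytes_up": 22_000_000_000}
    sig = sign_usage_record(record)           # done by the ISP
    print(verify_usage_record(record, sig))   # done by the auditor -> True
    record["bytes_down"] = 90_000_000_000     # a tampered report
    print(verify_usage_record(record, sig))   # -> False
```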