Red en Defensa de los Derechos Digitales (R3D) has released a report that compares Mexican ISPs’ transparency and privacy practices. The work parallels the Karisma Foundation’s report on Colombian ISPs’ transparency and privacy practices; both the Mexican and Colombian organizations’ reports are based on the Electronic Frontier Foundation’s “Who Has Your Back” reporting format. The format is designed to visually summarize the practices adopted by Internet companies so that end-users can easily evaluate how companies protect their users.
This post briefly summarizes R3D’s findings and then discusses whether Mexican companies’ transparency reports genuinely enable corporate accountability. Based on the academic literature, a strong argument can be made that the aggregated transparency reports issued by Mexican telecommunications companies do not make the companies particularly accountable to their customers. The post concludes by raising questions about the status of third-party comparisons of corporate privacy and transparency practices: why are intermediaries like R3D, the Karisma Foundation, the Electronic Frontier Foundation, or IX Maps so important? And what are the deficits of contemporary comparisons of corporate transparency and privacy practices?
Summary of R3D Findings
R3D’s report examines privacy policies and codes of practice from the eight Mexican telecommunications companies that, in aggregate, account for 98% of Mexico’s mobile, fixed-line, and broadband markets. Out of a possible six ‘stars’, only one company (Movistar) received two stars, the most of any company: a half-star for requiring a warrant for data requests, a half-star for publishing a transparency report, and a full star for advocating for privacy. The worst company, Megacable, received just a half-star for requiring a warrant for data requests.
Companies could receive a full star, a half-star, a quarter-star, or no star in each of the categories noted in Figure One. The evaluation criteria for these grades follow the figure.
I’ve been watching with some interest the new Artist 2 Fan 2 Artist project, recently started up by Jon Newton and Billy Bragg. The intent of the site is to bring artists and fans together and encourage these parties to speak directly with one another, without needing to pass through intermediaries such as producers, labels, public relations groups, managers, and so on. It will be interesting to see how the dialogue develops.
One of the key elements of the site that interests me is the discussion of paying artists (and other content creators): how can we avoid demonizing P2P users while at the same time allocating funds to artists/copyright owners in a responsible manner? On October 5th, this topic was broached under the posting ‘In Favour of a Music Tax’, and I wanted to bring some of my own comments surrounding the idea of a music tax to the forefront of my own writing space, and to the audience here.
I think that an ISP-focused levy system is inappropriate for several reasons: it gives carriers more authority and control over content analysis than they need, puts carriers at risk when they misidentify content, and would put carriers (for-profit content-delivery corporations) in charge of monitoring content without requiring that consumers pay ‘full value’ for the content moving through their networks. This last point indicates that an ISP-based levy puts ISPs in a conflict of interest (at least in the case of the dominant ISPs in Canada). Another solution is required.
The German Deep Packet Inspection (DPI) manufacturer ipoque has produced a white paper titled “Deep Packet Inspection: Technology, Applications & Network Neutrality.” In it, the company distinguishes between DPI as a technology and possible applications of the technology in a social environment. After this discussion, it provides a differentiated ‘tiering’ of various bandwidth-management impacts on network neutrality. In this post I offer a summary of, and commentary on, the white paper, and ultimately wonder whether there is an effective theoretical model, grounded in empirical study, to frame or characterize network neutrality advocates.
The first thing that ipoque does is try to deflate the commonly heard ‘DPI analysis = opening a sealed envelope’ analogy, arguing that it is better to see packets as postcards, where DPI analysis involves looking for particular keywords or characters. On this account, because the technology cannot know the meaning of what is being searched for, DPI appliances cannot be said to violate one’s privacy, given the technology’s lack of contextual awareness. I’ve made a similar kind of argument, that contextual meaning escapes DPI appliances (though along different lines), in a paper that I presented earlier this year titled “Moving Across the Internet: Code-Bodies, Code-Corpses, and Network Architecture,” though I think it’s important to recognize a difference between a machine understanding something itself and a machine flagging particular words and symbols for a human operator to review. Ubiquitous, “non-aware” machine surveillance can have very real effects when a human is alerted to communications; it’s something of a misnomer to say that privacy isn’t infringed simply because the machine doesn’t know what it’s doing. We ban and regulate all kinds of technologies because of what they can be used for, rather than because the technology itself is inherently bad (e.g. wiretaps).
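The distinction at issue, a machine matching byte patterns it does not “understand” and then flagging the hit for a human operator, can be sketched in a few lines of code. This is purely a hypothetical illustration of signature-based flagging, not ipoque’s implementation or any real DPI product; the signature list, labels, and sample payloads are all invented for the example.

```python
# Hypothetical sketch of the "postcard" model of DPI: match known byte
# patterns in packet payloads (no semantic understanding), then queue any
# hits for a human operator to review. Signatures and payloads are invented.
SIGNATURES = {
    b"BitTorrent protocol": "bittorrent-handshake",
    b"\x13BitTorrent": "bittorrent-prefix",
}

def flag_payload(payload: bytes) -> list[str]:
    """Return the labels of every signature found in the payload."""
    return [label for pattern, label in SIGNATURES.items() if pattern in payload]

# The machine only collects matches; a human is alerted to review them later.
review_queue = []
for packet in [b"GET /index.html HTTP/1.1", b"\x13BitTorrent protocol"]:
    hits = flag_payload(packet)
    if hits:
        review_queue.append((packet, hits))
```

The point the sketch makes concrete is that the substring test has no notion of context, yet the moment a match lands in `review_queue`, a human can be brought into the communication, which is precisely why the “the machine doesn’t understand it” defence rings hollow.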
In the current CRTC hearings over Canadian ISPs’ use of Deep Packet Inspection (DPI) to manage bandwidth, I see two ‘win situations’ for the dominant carriers:
- They can continue to throttle ‘problem’ applications in the future;
- The CRTC decides to leave the wireless market alone right now.
I want to talk about the effects of throttling problem applications, and about how people discussing DPI should focus on the negative consequences of regulation (something that is, admittedly, often done). Before doing so, however, I want to attend to the issue of censorship models, to render transparent the difficulties in relying on censorship-based arguments to oppose uses of DPI. Following this, I’ll consider some of the effects of regulating access to content through protocol throttling. The aim is to suggest that individuals and groups opposed to the throttling of particular application-protocols should focus on the effects of regulation, a more productive space of analysis and argumentation, rather than on DPI as an instrument of censorship.
Let’s first touch on the language of censorship itself. We typically understand this action in terms of a juridico-discursive model, or a model that relies on rules to permit or negate discourse. There are three common elements to this model-type: