Distinguishing Between Mobile Congestions

by Simon Tunbridge

There is an ongoing push to ‘better’ monetize the mobile marketplace. In this near-future market, wireless providers use DPI and other Quality of Service equipment to charge subscribers for each and every action they take online. The past few weeks have seen Sandvine and other vendors talk about this potential, and Rogers has begun testing the market to determine if mobile customers will pay for data prioritization. The prioritization of data is a network neutrality issue proper, and one that demands careful consideration and examination.

In this post, I’m not talking about network neutrality. Instead, I’m going to talk about what supposedly drives prioritization schemes in Canada’s wireless marketplace: congestion. Consider this a repartee to the oft-touted position that ‘wireless is different’: ISPs assert that wireless is different from wireline for their own regulatory ends, but blur the distinction between the two when pitching ‘congestion management’ schemes to customers. In this post I suggest that the congestion faced by AT&T and other wireless providers has far less to do with data congestion than with signal congestion, and that carriers have to take responsibility for the latter.

Continue reading

Rogers, Network Failures, and Third-Party Oversight

Photo credit: Faramarz Hashemi

Deep packet inspection (DPI) is a form of network surveillance and control that will remain in Canadian networks for the foreseeable future. It operates by examining data packets, determining their likely application-of-origin, and then delaying, prioritizing, or otherwise mediating the content and delivery of the packets. Ostensibly, ISPs have inserted it into their network architectures to manage congestion, mitigate unprofitable capital investment, and enhance billing regimes. These same companies routinely run tests of DPI systems to refine the algorithmic identification and mediation of data packets. Such tests evaluate algorithmic enhancements to system productivity and efficiency at a micro level before new policies are rolled out to the entire network.
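To make that classify-then-mediate loop concrete, here is a minimal illustrative sketch in Python. It is not any vendor’s implementation: the payload signatures, application labels, and policy actions below are hypothetical stand-ins for the far richer, proprietary rule sets that real DPI appliances carry.

```python
import re

# Hypothetical payload signatures a DPI engine might match against;
# real appliances use far larger, proprietary signature sets.
SIGNATURES = {
    "bittorrent": re.compile(rb"\x13BitTorrent protocol"),
    "http":       re.compile(rb"^(?:GET|POST|HEAD) \S+ HTTP/1\.[01]"),
    "smtp":       re.compile(rb"^(?:EHLO|HELO|MAIL FROM:)", re.IGNORECASE),
}

# Hypothetical per-application policies of the kind described above:
# delay (throttle), prioritize, or simply forward the packet.
POLICIES = {
    "bittorrent": "throttle",
    "http": "prioritize",
    "smtp": "forward",
    "unknown": "forward",
}

def classify(payload: bytes) -> str:
    """Guess a packet's likely application-of-origin from its payload."""
    for app, pattern in SIGNATURES.items():
        if pattern.search(payload):
            return app
    return "unknown"

def mediate(payload: bytes) -> str:
    """Return the policy action that would be applied to this packet."""
    return POLICIES[classify(payload)]

if __name__ == "__main__":
    sample = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
    print(classify(sample), "->", mediate(sample))  # http -> prioritize
```

The point of the sketch is simply that once a packet is labelled, the label, rather than the subscriber’s intent, determines whether the traffic is throttled, prioritized, or forwarded untouched.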

Such tests are not publicly broadcast, nor are customers notified when ISPs update their DPI devices’ long-term policies. While notification must be provided to various bodies when material changes are made to the network, non-material changes can typically be deployed quietly. Few notice when a deployment of significant scale happens…unless it goes wrong. Based on user reports in the DSLreports forums, it appears that one of Rogers’ recent policy updates was poorly tested and then massively deployed. The ill effects of this deployment remain unresolved, over sixty days later.

In this post, I first detail issues facing Rogers customers, drawing heavily from forum threads at DSLreports. I then suggest that this incident demonstrates multiple failings around DPI governance: a failure to properly evaluate analysis and throttling policies; a failure to meaningfully acknowledge problems arising from DPI misconfiguration; and a failure to proactively alleviate the inconveniences of accidental throttling. Large ISPs’ ability to modify the conditions of data transit and discrimination is problematic because it increases the risks faced by innovators and developers, who cannot predict future data discrimination policies. Such increased risks threaten the overall generative nature of the ends of the Internet. To alleviate some of these risks, a trusted third party should be established. This party would monitor how ISPs themselves govern data traffic and alert citizens and regulators if ISPs discriminate against ‘non-problematic’ traffic types or violate their own terms of service. I ultimately suggest that an independent, though associated, branch of the CRTC responsible for watching over ISPs could improve trust between Canadians and the CRTC, and between customers and their ISPs.

Continue reading

Review: Internet Architecture and Innovation

I want to highly recommend Barbara van Schewick’s Internet Architecture and Innovation. Various authors, advocates, scholars, and businesses have spoken about the economic impacts of the Internet, but to date there hasn’t been a detailed economic accounting of what may happen if/when ISPs monitor and control the flow of data across their networks. van Schewick has filled this gap by examining “how changes in the Internet’s architecture (that is, its underlying technical structure) affect the economic environment for innovation” and evaluating “the impact of these changes from the perspective of public policy” (van Schewick 2010: 2).

Her book traces the economic consequences of changing the Internet’s structure from one enabling any innovator to design an application or share content online to one where ISPs must first authorize access to content and design key applications in-house (e.g. P2P, email, etc.). van Schewick draws heavily from Internet history literatures and economic theory to buttress her position that a closed or highly controlled Internet not only constitutes a massive change to the original architecture of the ‘net, but that this change would be damaging to society’s economic, cultural, and political interests. She argues that an increasingly controlled Internet is the future that many ISPs prefer, and supports this conclusion with economic theory and the historical actions of American telecommunications corporations.

van Schewick begins by outlining two notions of the end-to-end principle undergirding the ‘net (a narrow and a broad conception), and argues (successfully, in my mind) that ISPs and their critics often rely on different end-to-end understandings in making their respective arguments to the public, regulators, and each other.

Continue reading

Lesson Drawing from the Telegraph

By David Dugan

In the domain of telecom policy, it seems that a series of bad ideas (re)arises alongside major innovations in communications systems and technologies. In this post, I want to turn to the telegraph to shed light on issues of communication bandwidth, security, and privacy that are being (re)addressed by regulators around the world as they grapple with the Internet. I’ll speak to the legacy of data retention in analogue and digital communicative infrastructures, congestion management, protocol development, and encryption policies to demonstrate how these issues have arisen in the past, and conclude by suggesting a few precautionary notes about the future of the Internet. Before getting into the meat of this post, I want to acknowledge that while the telegraph can be usefully identified as a precursor to the digital Internet because of the strong analogies between the two technological systems, it did rest on different technological scaffolding. Thus, the lessons drawn here are based on analogical similarities, rather than on technical homogeneity, between the systems.

The Telegraph

The telegraph took years to develop. Standardization was a particular issue, perhaps best epitomized by the French having an early telegraph system of (effectively) high-tech signal towers while other nations struggled to develop interoperable, cross-continental, electrically based systems. Following the French communication innovation (which was largely used to coordinate military endeavours), inventors in other nations such as Britain and the United States spent considerable time learning how to send electrical pulses along various kinds of cables to communicate information at high speed across vast distances.

Continue reading

Forthcoming Talk at Social Media Club Vancouver

I’ve been invited to talk to Vancouver’s vibrant Social Media Club on October 7! I’m thrilled to be presenting, and will be giving a related (though very different) talk from the one a few days earlier at Social Media Camp Victoria. Instead of making traffic analysis a focus, I’ll be speaking more broadly about what I’ll be referring to as a ‘malaise of privacy’. This general discomfort with moving around online is (I will suggest) significantly related to the opaque privacy laws and protections that supposedly secure individuals’ privacy online, as contrasted against the daily reality of identity theft, data breaches, and so forth. The thrust will be to provide those in attendance with the theoretical background to develop their own ethic(s) of privacy and, in turn, to make legal privacy statements more accessible and understandable.

See below for the full abstract:

Supplementing Privacy Policies with a Privacy Ethic

Social media platforms are increasingly common (and often cognitively invisible) facets of Western citizens’ lives; we post photos to Facebook and Flickr, engage in conversations on Orkut and Twitter, and relax by playing games on Zynga and Blizzard infrastructures. The shift to the Internet as a platform for mass real-time socialization and service provision demands a tremendous amount of trust on the part of citizens, and research indicates that citizens are increasingly concerned about whether their trust is well placed. Analytics, behavioural advertising, identity theft, and data mismanagement strain the public’s belief that digital systems are ‘privacy neutral’, even as citizens remain worried about the technological determinisms purported to drive socialized infrastructures.

For this presentation, I begin by briefly reviewing the continuum of the social web, touching on the movement from Web 1.0 to 2.0, and the future as ‘Web Squared’. Next, I address the development of various data policy instruments intended to protect citizens’ privacy online and to facilitate citizens’ trust in social media environments that require personal information as the ‘cost of entry’. Drawing on academic and popular literature, I suggest that individuals participating in social media environments care deeply about their privacy and distrust (and dislike) the ubiquity of online surveillance, especially in the spaces where they communicate and play. Daily experiences with data protection – often manifest in the form of privacy statements and policies – are seen as unapproachable, awkward, and obtuse by most social media users. Privacy statements and their oft-associated surveillance infrastructures contribute to a broader social malaise surrounding the effectiveness of formal data protection and privacy laws.

Given the presence of this malaise, and potential inability of contemporary data protection laws to secure individuals’ privacy, what can be done? I suggest that those involved in social media are well advised to develop an ethic of privacy to supplement legally required privacy statements. By adopting clear statements of ethics, supplemented with legal language and opt-in data disclosures of personal information, operators of social media environments can be part of the solution to society’s privacy malaise. Rather than outlining an ethic myself, I provide the building blocks for those attending to establish their own ethic. I do this by identifying dominant theoretical approaches to privacy: privacy as a matter of control, as an individual vs community vs hybrid issue, as an issue of knowledge and agency, and as a question of contextual data flows. With an understanding of these concepts, those attending will be well suited to supplement their privacy statements and policies with a nuanced and substantive ethics of privacy.

Call for Cyber-Surveillance Annotated Bibliographies

The New Transparency Project, as part of its international cyber-surveillance workshop, is issuing a call for annotated bibliographies on issues pertinent to the workshop. Again, given that issues concerning cyber-surveillance likely resonate with readers of this space, I wanted to alert you to this call. These bibliographies are meant to serve as a resource for those attending the May 12-15, 2011 workshop at the University of Toronto. The deadline for submissions is September 15, 2010. Submissions should be a maximum of 500 words, and acceptance notifications will be issued by September 30, 2010. The authors (at least three) invited to prepare annotated bibliographies will each be paid $2,000 (Cdn.) in two equal instalments: the first upon acceptance of the assignment, and the balance upon the bibliography’s satisfactory completion. The full call follows below:

Digitally Mediated Surveillance: From the Internet to Ubiquitous Computing

Digitally mediated surveillance (cyber-surveillance) is a growing and increasingly controversial aspect of everyday life in ‘advanced’ societies. Governments, corporations, and even individuals are deploying digital techniques as diverse as social networking, video analytics, data mining, wireless packet sniffing, and RFID skimming, yet relatively little is known about actual practices and their implications. It is now over 15 years since the advent of the World Wide Web and of widespread use of the Internet for electronic commerce, electronic government, and social networking. The impending emergence of the ‘Internet of things’ promises (or threatens) to further insinuate digital surveillance capabilities into the fabric of daily life. Media alarmists have fueled a general popular understanding that one’s life is an open book when one goes online, making one increasingly subject to unwelcome intrusions. The reality is more complex and contingent on a variety of technological, institutional, legal, and cultural factors.

Continue reading