Distinguishing Between Mobile Congestions

Photo credit: Simon Tunbridge

There is an ongoing push to ‘better’ monetize the mobile marketplace. In this near-future market, wireless providers would use DPI and other Quality of Service equipment to charge subscribers for each and every action they take online. The past few weeks have seen Sandvine and other vendors talk about this potential, and Rogers has begun testing the market to determine whether mobile customers will pay for data prioritization. The prioritization of data is a network neutrality issue proper, and one that demands careful consideration and examination.

In this post, I’m not talking about network neutrality. Instead, I’m going to talk about what supposedly drives prioritization schemes in Canada’s wireless marketplace: congestion. Consider this a riposte to the oft-touted position that ‘wireless is different’: ISPs assert that wireless is different from wireline for their own regulatory ends, but blur distinctions between the two when pitching ‘congestion management’ schemes to customers. I suggest that the strain faced by AT&T and other wireless providers has far less to do with data congestion than with signal congestion, and that carriers have to own responsibility for the latter.
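To make the distinction concrete, here is a back-of-the-envelope sketch in Python. Every figure in it (sessions per hour, payload sizes, signalling messages per session) is an invented placeholder rather than a carrier measurement; the point is only the shape of the comparison, namely that a chatty application can move almost no data while generating far more signalling load than a bulk transfer.

```python
# Illustrative only: all numbers below are hypothetical placeholders,
# not measurements from any carrier.

def network_load(sessions_per_hour, bytes_per_session, signalling_msgs_per_session=30):
    """Rough per-device load: data volume vs. signalling messages per hour.

    signalling_msgs_per_session stands in for the radio-resource setup and
    teardown chatter each data session triggers; 30 is an assumed value.
    """
    data_mb = sessions_per_hour * bytes_per_session / 1e6
    signalling_msgs = sessions_per_hour * signalling_msgs_per_session
    return data_mb, signalling_msgs

# A chatty app polling every 30 seconds with tiny payloads...
chatty = network_load(sessions_per_hour=120, bytes_per_session=2_000)
# ...versus a single sustained bulk download in the same hour.
bulk = network_load(sessions_per_hour=1, bytes_per_session=50_000_000)

print(f"chatty app   : {chatty[0]:.2f} MB/h, {chatty[1]} signalling messages/h")
print(f"bulk download: {bulk[0]:.2f} MB/h, {bulk[1]} signalling messages/h")
```

Under these assumed numbers the chatty device moves a fraction of a megabyte yet triggers thousands of signalling messages an hour, while the bulk download does the reverse; only the first pattern strains the signalling channels that carriers themselves provision.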

Continue reading

Publication – Digital Inflections: Post-Literacy and the Age of Imagination

Earlier this year I was contacted by CTheory to find and interview interesting people who are doing work at the intersection of theory, digitality, and information. Michael Ridley, the Chief Information Officer and Chief Librarian at the University of Guelph, was the first person who came to mind. I met with Michael earlier this year for a face-to-face discussion, and our conversation has since been transcribed and published at CTheory. Below is the full introduction to the interview.

“… [O]ne of the things about librarians is that they’re subversive in the nicest possible ways. They’ve been doing the Wikileak thing for centuries, but just didn’t get the credit for it. This is what we try to do all the time; we try to reduce the barriers and open up that information.”
— Michael Ridley

Self-identifying as the University’s Head Geek and Chief Dork, Michael Ridley leads a life of the future by reconfiguring access to the past. As Chief Librarian and Chief Information Officer of the University of Guelph, Ridley spends his days integrating digital potentialities and the power of imagination with the cultural and historical resources of the library. Seeing the digital as a liminal space between the age of the alphabet and an era of post-literacy, he is transforming the mission of libraries: gone are the days when libraries focused primarily on developing collections. Today, collections are the raw materials fueling the library as a dissonance engine, an engine enabling collaborative, cross-disciplinary imaginations.

Critical of the hegemony of literacy and prognosticating digitality’s impending demise, Ridley uses his position at the University of Guelph to facilitate radical reconsiderations of the library’s present and forthcoming roles. He received his M.L.S. from the University of Toronto and his M.A. from the University of New Brunswick, and has been a professional librarian since 1979. Michael has served as President of the Canadian Association for Information Science, President of the Ontario Library Association, Board member of the Canadian Association of Research Libraries, and Chair of the Ontario Council of Universities. He is presently a board member of the Canadian Research Knowledge Network and of the Canadian University Council of CIOs. He has received an array of awards, most recently the Miles Blackwell Award for Outstanding Academic Librarians from the Canadian Association of College and University Libraries. Ridley has published extensively about the intersection of networks, digital systems, and libraries, including "The Online Catalogue and the User," "Providing Electronic Library Reference Service: Experiences from the Indonesia-Canada Tele-Education Project," "Computer-Mediated Communications Systems," and "Community Development in the Digital World." He has also co-edited volumes one and two of The Public-Access Computer Systems Review. Lately, his work has examined the potentials of post-literacy, which has seen him teach an ongoing undergraduate class on literacy and post-literacy as well as give presentations and publish on the topic.

Read the full conversation at CTheory

iPhone Promiscuity

Photo credit: Steve Keys

I’ve written a fair bit about mobile phones; they’re considerable conveniences that are accompanied by serious security, privacy, and technical deficiencies. Perhaps unsurprisingly, given Apple’s aura of producing ‘excellent’ products and the general popularity of its mobile devices, the iPhone has received a considerable amount of criticism in the press and from industry.

In this short post I want to revisit two issues I’ve previously written about: the volume of information that the iPhone emits when attached to WiFi networks, and its contribution to carriers’ wireless network congestion. On the first issue, my aim is to further document here, for my readers and my own projects, just how much information the iPhone makes available to third parties. The second, however, reveals that a technical solution resolved the underlying cause of the wireless congestion associated with Apple products; trapping customers into bucket-based data plans in response to that congestion therefore primarily served financial bottom lines rather than customers’ interests. This instance of applying an inefficient (economic) solution to a technical problem might, then, serve as a good example of the difference between ‘reasonable technical management’ that is shaped by both technical and business goals, and management of the network infrastructure itself.
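By way of illustration only (and not the measurements in the full post), the sketch below uses the third-party zeroconf package to passively browse a few Bonjour/mDNS service types that Apple devices commonly announce on a local WiFi network; whatever device names and service properties appear are visible to anyone else on that network. The specific service types listed are assumptions chosen to demonstrate the idea, not an inventory of everything an iPhone emits.

```python
# A minimal sketch, assuming the third-party `zeroconf` package
# (pip install zeroconf). It passively listens for Bonjour/mDNS
# announcements; the service types below are illustrative assumptions.
import time
from zeroconf import Zeroconf, ServiceBrowser

SERVICE_TYPES = [
    "_airplay._tcp.local.",       # AirPlay
    "_raop._tcp.local.",          # remote audio (AirTunes)
    "_apple-mobdev2._tcp.local.", # iOS device announcements
]

class AnnouncementLogger:
    """Print whatever nearby devices volunteer about themselves."""

    def add_service(self, zc, type_, name):
        print(f"announced: {name}")
        info = zc.get_service_info(type_, name)
        if info and info.properties:
            for key, value in info.properties.items():
                print(f"  {key!r}: {value!r}")

    def update_service(self, zc, type_, name):
        pass

    def remove_service(self, zc, type_, name):
        print(f"withdrawn: {name}")

zc = Zeroconf()
browsers = [ServiceBrowser(zc, t, AnnouncementLogger()) for t in SERVICE_TYPES]
try:
    time.sleep(60)  # listen for a minute
finally:
    zc.close()
```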

Continue reading

Rogers, Network Failures, and Third-Party Oversight

Photo credit: Faramarz Hashemi

Deep packet inspection (DPI) is a form of network surveillance and control that will remain in Canadian networks for the foreseeable future. It operates by examining data packets, determining their likely application-of-origin, and then delaying, prioritizing, or otherwise mediating the content and delivery of those packets. Ostensibly, ISPs have inserted it into their network architectures to manage congestion, mitigate unprofitable capital investment, and enhance billing regimes. These same companies routinely run tests of DPI systems to better nuance the algorithmic identification and mediation of data packets. These tests evaluate algorithmic enhancements to system productivity and efficiency at a micro level before new policies are rolled out to the entire network.
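To make that classify-then-mediate mechanism concrete, here is a deliberately toy sketch of the logic in Python: match a packet payload against application signatures, then look up a per-application policy. The signatures and policies are invented for illustration; production DPI appliances also track flows, ports, and statistical or behavioural features rather than relying on simple payload patterns.

```python
# Toy illustration of DPI-style classification and mediation.
# Signatures and policies are invented placeholders.
import re
from dataclasses import dataclass

SIGNATURES = {
    "bittorrent": re.compile(rb"\x13BitTorrent protocol"),  # BT handshake
    "http":       re.compile(rb"^(GET|POST|HEAD) "),        # HTTP request line
    "ssh":        re.compile(rb"^SSH-2\.0"),                # SSH banner
}

POLICIES = {                  # action applied once traffic is classified
    "bittorrent": "throttle",
    "http":       "pass",
    "ssh":        "pass",
    "unknown":    "pass",     # unrecognized traffic is left alone here
}

@dataclass
class Verdict:
    application: str
    action: str

def inspect(payload: bytes) -> Verdict:
    """Guess the application from the payload, then look up its policy."""
    for app, pattern in SIGNATURES.items():
        if pattern.search(payload):
            return Verdict(app, POLICIES[app])
    return Verdict("unknown", POLICIES["unknown"])

print(inspect(b"\x13BitTorrent protocol..."))  # -> throttle
print(inspect(b"GET /index.html HTTP/1.1"))    # -> pass
```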

Such tests are not publicly broadcast, nor are customers notified when ISPs update their DPI devices’ long-term policies. While notification must be provided to various bodies when material changes are made to the network, non-material changes can typically be deployed quietly. Few notice when a deployment of significant scale happens … unless it goes wrong. Based on user reports in the DSLreports forums, it appears that one of Rogers’ recent policy updates was poorly tested and then massively deployed. The ill effects of this deployment remain unresolved more than sixty days later.

In this post, I first detail the issues facing Rogers customers, drawing heavily from forum threads at DSLreports. I then suggest that this incident demonstrates multiple failings around DPI governance: a failure to properly evaluate analysis and throttling policies; a failure to meaningfully acknowledge problems arising from DPI misconfiguration; and a failure to proactively alleviate the inconveniences of accidental throttling. Large ISPs’ ability to modify the conditions of data transit and discrimination is problematic because it increases the risks faced by innovators and developers, who cannot predict future data discrimination policies. Such increased risks threaten the overall generative nature of the ends of the Internet. To alleviate some of these risks, a trusted third party should be established. This party would monitor how ISPs govern data traffic and alert citizens and regulators if ISPs discriminate against ‘non-problematic’ traffic types or violate their own terms of service. I ultimately suggest that an independent, though associated, branch of the CRTC responsible for watching over ISPs could improve trust between Canadians and the CRTC, and between customers and their ISPs.
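Such a watchdog could lean on measurement techniques that already exist: run an identifiable flow and an otherwise equivalent obfuscated control flow over the same path, then flag a large throughput gap between the two. The sketch below is a minimal, hypothetical version of that comparison; the sample figures and the 0.7 threshold are assumptions, not calibrated values.

```python
# Hypothetical discrimination check: compare throughput samples for an
# identifiable flow against an obfuscated control flow on the same path.
# The numbers and the threshold are illustrative assumptions.
from statistics import median

def looks_throttled(identified_kbps, control_kbps, threshold=0.7):
    """Flag likely discrimination when the identifiable flow's median
    throughput falls well below the control flow's median."""
    ratio = median(identified_kbps) / median(control_kbps)
    return ratio < threshold, ratio

identified = [82, 75, 90, 79, 85]       # kbit/s when traffic is classifiable
control    = [610, 595, 640, 620, 630]  # kbit/s for the obfuscated control

flagged, ratio = looks_throttled(identified, control)
print(f"ratio={ratio:.2f} -> {'possible throttling' if flagged else 'no evidence of throttling'}")
```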

Continue reading

Review: Internet Architecture and Innovation

I very highly recommend Barbara van Schewick’s Internet Architecture and Innovation. Various authors, advocates, scholars, and businesses have spoken about the economic impacts of the Internet, but to date there hasn’t been a detailed economic accounting of what may happen if/when ISPs monitor and control the flow of data across their networks. van Schewick has filled this gap by examining “how changes in the Internet’s architecture (that is, its underlying technical structure) affect the economic environment for innovation” and evaluating “the impact of these changes from the perspective of public policy” (van Schewick 2010: 2).

Her book traces the economic consequences of changing the Internet’s structure from one that enables any innovator to design an application or share content online to one where ISPs must first authorize access to content and design key applications in-house (e.g., P2P, email, etc.). Barbara draws heavily from Internet history literatures and economic theory to buttress her position that a closed or highly controlled Internet not only constitutes a massive change to the original architecture of the ‘net, but that this change would damage society’s economic, cultural, and political interests. She argues that an increasingly controlled Internet is the future that many ISPs prefer, and she supports this conclusion with economic theory and the historical actions of American telecommunications corporations.

van Schewick begins by outlining two conceptions of the end-to-end principle undergirding the ‘net, one narrow and one broad, and argues (successfully, in my mind) that ISPs and their critics often rely on different understandings of end-to-end when making their respective arguments to the public, regulators, and each other.

Continue reading

Decrypting Blackberry Security, Decentralizing the Future

Photo credit: Honou

Countries around the globe have been threatening Research in Motion (RIM) for months now, publicly stating that they will ban BlackBerry services if RIM refuses to provide decryption keys to various governments. The tech press has generally focused on ‘governments just don’t get how encryption works’ rather than on ‘this is how BlackBerry security works, and this is how government demands affect consumers and businesses alike.’ This post is an effort to respond to that second framing in something approximating comprehensive detail.

I begin by writing openly and (hopefully!) clearly about the nature and deficiencies of BlackBerry security, and about RIM’s rhetoric around consumer security in particular. After sketching how the BlackBerry ecosystem secures communications data, I pivot to identify many of the countries demanding greater access to BlackBerry-linked data communications. Finally, I suggest that RIM might overcome these kinds of governmental demands by transitioning from a 20th-century to a 21st-century information company. The BlackBerry server infrastructure, combined with the vertical integration of the rest of its product lines, limits RIM to being a ‘places’ company; shifting to a 21st-century ‘spaces’ company might limit RIM’s exposure to presently ‘enjoyed’ governmental excesses by forcing governments to rearticulate notions of sovereignty in the face of networked governance.
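For readers unfamiliar with the enterprise side of that ecosystem, the sketch below illustrates the property the argument turns on: in the BES model, message traffic is protected with a symmetric key shared only by the handset and the enterprise server, so RIM’s relay infrastructure handles only ciphertext and has no key to surrender. The use of AES-GCM, the third-party cryptography package, and the greatly simplified key handling are my assumptions for illustration, not RIM’s actual implementation (BES deployments have historically used Triple DES or AES in other configurations).

```python
# Simplified illustration of the BES-style model: handset and enterprise
# server share a symmetric key; the relay in the middle only sees ciphertext.
# Requires the third-party `cryptography` package (pip install cryptography).
# Key management is deliberately simplified for illustration.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In this sketch the key is generated at activation and known only to the
# handset and the enterprise server.
shared_key = AESGCM.generate_key(bit_length=256)

def handset_send(plaintext: bytes) -> bytes:
    """Encrypt on the device before anything crosses the carrier network."""
    nonce = os.urandom(12)
    return nonce + AESGCM(shared_key).encrypt(nonce, plaintext, None)

def enterprise_server_receive(blob: bytes) -> bytes:
    """Decrypt behind the corporate firewall, after transiting the relay."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(shared_key).decrypt(nonce, ciphertext, None)

message = b"Quarterly numbers attached."
in_transit = handset_send(message)   # all a relay-level intercept would see
print(in_transit.hex()[:48], "...")  # opaque bytes without the shared key
print(enterprise_server_receive(in_transit))
```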

Continue reading