Rogers, Network Failures, and Third-Party Oversight

Photo credit: Faramarz Hashemi

Deep packet inspection (DPI) is a form of network surveillance and control that will remain in Canadian networks for the foreseeable future. It operates by examining data packets, determining their likely application-of-origin, and then delaying, prioritizing, or otherwise mediating the content and delivery of those packets. Ostensibly, ISPs have inserted DPI into their network architectures to manage congestion, avoid unprofitable capital investment, and enhance billing regimes. These same companies routinely run tests of their DPI systems to refine the algorithmic identification and mediation of data packets. Such tests evaluate algorithmic enhancements to system productivity and efficiency at the microlevel before new policies are rolled out to the entire network.
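
To make that mechanism concrete, here is a minimal sketch of DPI-style classification and shaping in Python. The payload signatures, application labels, and rate limits are invented for illustration – they are not Rogers’ actual rules – and production DPI gear uses far richer protocol models than simple prefix matching:

```python
# Toy illustration of DPI-style packet classification and throttling.
# Signatures and rate limits are hypothetical, not any ISP's real policy.

from dataclasses import dataclass
from typing import Optional

# Byte patterns suggesting a packet's application-of-origin.
SIGNATURES = {
    b"\x13BitTorrent protocol": "bittorrent",  # BitTorrent handshake prefix
    b"GET ": "http",
    b"SSH-": "ssh",
}

# Per-application policy: allowed bytes/second (None = pass unshaped).
RATE_LIMITS = {"bittorrent": 80_000, "http": None, "ssh": None}

@dataclass
class Verdict:
    app: str
    throttle_bps: Optional[int]

def classify(payload: bytes) -> Verdict:
    """Guess the application from the payload prefix and attach a policy."""
    for signature, app in SIGNATURES.items():
        if payload.startswith(signature):
            return Verdict(app, RATE_LIMITS.get(app))
    # Unrecognized (often encrypted) traffic lands in a default bucket.
    # A careless default policy here is one way 'collateral' throttling
    # of innocent applications happens.
    return Verdict("unknown", None)

print(classify(b"\x13BitTorrent protocol..."))  # shaped to 80,000 B/s
print(classify(b"GET /index.html HTTP/1.1"))    # passed through unshaped
```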

Such tests are not publicly broadcast, nor are customers notified when ISPs update their DPI devices’ long-term policies. While notification must be provided to various bodies when material changes are made to the network, non-material changes can typically be deployed quietly. Few notice when a deployment of significant scale happens…unless it goes wrong. Based on user reports in the DSLreports forums, it appears that one of Rogers’ recent policy updates was poorly tested and then massively deployed. The ill effects of that deployment remain unresolved more than sixty days later.

In this post, I first detail the issues facing Rogers customers, drawing heavily from forum threads at DSLreports. I then suggest that this incident demonstrates multiple failings around DPI governance: a failure to properly evaluate analysis and throttling policies; a failure to meaningfully acknowledge problems arising from DPI misconfiguration; and a failure to proactively alleviate the inconveniences of accidental throttling. Large ISPs’ ability to modify the conditions of data transit and discrimination is problematic because it increases the risks facing innovators and developers, who cannot predict future data discrimination policies. Such increased risks threaten the overall generative nature of the ends of the Internet. To alleviate some of these risks, a trusted third party should be established. This party would monitor how ISPs themselves govern data traffic and alert citizens and regulators if ISPs discriminate against ‘non-problematic’ traffic types or violate their own terms of service. I ultimately suggest that an independent, though associated, branch of the CRTC responsible for watching over ISPs could improve trust between Canadians and the CRTC, and between customers and their ISPs.
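
As a rough illustration of what such third-party monitoring could involve, the sketch below compares repeated throughput measurements for application-shaped traffic against byte-identical ‘neutral’ control flows over the same path, the basic approach taken by academic probes such as Glasnost. The sample figures and the 0.8 tolerance threshold are hypothetical:

```python
# Sketch of a differential throttling probe: transfer the same volume
# of bytes as application traffic and as a neutral control flow, then
# compare the two. All numbers below are hypothetical placeholders.

from statistics import median

def summarize_kbps(samples: list[float]) -> float:
    """Summarize repeated transfer measurements robustly."""
    return median(samples)

def discrimination_suspected(app_kbps: list[float],
                             control_kbps: list[float],
                             tolerance: float = 0.8) -> bool:
    """Flag paths where application-shaped traffic runs markedly
    slower than byte-identical control traffic."""
    return summarize_kbps(app_kbps) < tolerance * summarize_kbps(control_kbps)

# Five repeated 1 MB transfers, measured in kbit/s:
bittorrent_runs = [240.0, 250.0, 235.0, 255.0, 245.0]
control_runs = [980.0, 1010.0, 995.0, 1005.0, 990.0]

if discrimination_suspected(bittorrent_runs, control_runs):
    print("Possible protocol-specific throttling; flag for regulators.")
```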

Continue reading

Review: Internet Architecture and Innovation

I want to very highly recommend Barbara van Schewick’s Internet Architecture and Innovation. Various authors, advocates, scholars, and businesses have spoken about the economic impacts of the Internet, but to date there hasn’t been a detailed economic accounting of what may happen if/when ISPs monitor and control the flow of data across their networks. van Schewick has filled this gap by examining “how changes in the Internet’s architecture (that is, its underlying technical structure) affect the economic environment for innovation” and evaluating “the impact of these changes from the perspective of public policy” (van Schewick 2010: 2).

Her book traces the economic consequences of changing the Internet’s structure from one enabling any innovator to design an application or share content online to one where ISPs must first authorize access to content and design key applications in-house (e.g. P2P, email, etc.). van Schewick draws heavily from Internet history literatures and economic theory to buttress her position that a closed or highly controlled Internet not only constitutes a massive change to the original architecture of the ‘net, but that this change would damage society’s economic, cultural, and political interests. She argues that an increasingly controlled Internet is the future that many ISPs prefer, and she supports this conclusion with economic theory and the historical actions of American telecommunications corporations.

van Schewick begins by outlining two notions of the end-to-end principle undergirding the ‘net, a narrow and a broad conception, and argues (successfully, to my mind) that ISPs and their critics often rely on different understandings of end-to-end when making their respective arguments to the public, regulators, and each other.

Continue reading

Decrypting Blackberry Security, Decentralizing the Future

Photo credit: Honou

Countries around the globe have been threatening Research in Motion (RIM) for months now, publicly stating that they will ban BlackBerry services if RIM refuses to provide decryption keys to various governments. The tech press has generally focused on ‘governments just don’t get how encryption works’ rather than ‘this is how BlackBerry security works, and how government demands affect consumers and businesses alike.’ This post is an effort to respond more completely to the second framing, in something approximating comprehensive detail.

I begin by writing openly and (hopefully!) clearly about the nature and deficiencies of BlackBerry security, and about RIM’s rhetoric around consumer security in particular. After sketching how the BlackBerry ecosystem secures communications data, I pivot to identify many of the countries demanding greater access to BlackBerry-linked data communications. Finally, I suggest that RIM might overcome these kinds of governmental demands by transitioning from a 20th-century to a 21st-century information company. The BlackBerry server infrastructure, combined with the vertical integration of the rest of the product line, limits RIM to being a ‘places’ company. Shifting to a 21st-century ‘spaces’ company might limit RIM’s exposure to presently ‘enjoyed’ governmental excesses by forcing governments to rearticulate notions of sovereignty in the face of networked governance.
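
For readers who want the crux of why these demands are technically awkward, here is a minimal model in Python using the third-party cryptography package. Fernet (AES-based) stands in for the Triple DES/AES transport keys that BlackBerry Enterprise Servers actually provision, and the names and message are invented; the point is simply that a relay which never holds the key cannot decrypt what it carries:

```python
# Minimal model of end-to-end symmetric encryption between a handset
# and an enterprise server. Fernet is a stand-in for BlackBerry's real
# transport ciphers; names and message contents are invented.

from cryptography.fernet import Fernet  # pip install cryptography

# Key generated inside the enterprise and provisioned to the handset.
# The carrier and RIM's relay infrastructure never see it.
shared_key = Fernet.generate_key()
handset = Fernet(shared_key)
enterprise_server = Fernet(shared_key)

ciphertext = handset.encrypt(b"Quarterly figures attached.")

# A relay (carrier, RIM's NOC, or a wiretap) handles only this token:
print(ciphertext[:32], b"...")

# Only a party holding shared_key can recover the plaintext:
print(enterprise_server.decrypt(ciphertext))
```

In this model, satisfying a decryption demand means changing where shared_key lives, which is an architectural decision rather than a simple compliance step.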

Continue reading

Lesson Drawing from the Telegraph

By David Dugan

In the domain of telecom policy, it seems that a series of bad ideas (re)arises alongside each major innovation in communications systems and technologies. In this post, I turn to the telegraph to shed light on issues of communication bandwidth, security, and privacy that are being (re)addressed by regulators around the world as they grapple with the Internet. I’ll speak to the legacy of data retention in analogue and digital communications infrastructures, congestion management, protocol development, and encryption policies to demonstrate how these issues have arisen in the past, and I’ll conclude by suggesting a few precautionary notes about the future of the Internet. Before getting into the meat of this post, I want to acknowledge that while the telegraph can usefully be identified as a precursor to the digital Internet because of the strong analogies between the two technological systems, it relied on different technological scaffolding. The lessons drawn here therefore rest on analogical similarities, rather than on technical homogeneity, between the systems.

The Telegraph

The telegraph took years to develop. Standardization was a particular issue, perhaps best epitomized by the French operating an early telegraph system of (effectively) high-tech optical signal towers while other nations struggled to develop interoperable, cross-continental, electrically based systems. Following the French communication innovation (which was largely used to coordinate military endeavours), inventors in other nations such as Britain and the United States spent considerable amounts of time learning how to send electrical pulses along various kinds of cables to communicate information at high speed across vast distances.

Continue reading

Recording of ‘Traffic Analysis, Privacy, and Social Media’

The abstract for my presentation, as well as its references, has already been made available. I wasn’t aware (or had forgotten) that all of the presentations from Social Media Camp Victoria were going to be recorded and put on the web, but I thought that others visiting this space might be interested in my talk. The camera is zoomed in on me, which means you miss some of the context provided by the slides and by my references to people in the audience as I was talking. (Having quickly looked at/listened to some of what I say, I feel as though I’m adopting a presentation style similar to a few people I watch a lot. Not sure how I feel about that… The inability to actually walk around – being tethered to the mic and laptop – was particularly uncomfortable, and it comes across in my body language, I think.)

Immediately after my presentation, Kris Constable of PrivaSecTech gives a privacy talk on social media that focuses on the inability to control personal information dissemination. Following his presentation, the two of us take questions from the audience for twenty or thirty minutes.


References for ‘Putting the Meaningful into Meaningful Consent’

By Stephanie Booth

During my presentation last week at Social Media Club Vancouver – abstract available! – I drew from a large set of sources, the majority of which differed from those in my earlier talk at Social Media Camp Victoria. As noted earlier, it’s almost impossible to give full citations in the middle of a talk, but I want to make them available post-talk for interested parties.

Below are my keynote presentation and list of references. Unfortunately, academic paywalls prevent me from linking to all of the items used, to say nothing of chapters in various books. Still, most of the articles should be accessible through Canadian university libraries, and most of the books are in print (if sometimes expensive).

I want to thank Lorraine Murphy and Cathy Browne for inviting me and doing a stellar job of publicizing my talk to the broader media. It was a delight speaking to the group at SMC Vancouver, as well as to reporters and their audiences across British Columbia and Alberta.

Keynote presentation [20.4MB; made in Keynote ’09]

References

Bennett, C. (1992). Regulating Privacy: Data Protection and Public Policy in Europe and the United States. Ithaca: Cornell University Press.

Bennett, C. (2008). The Privacy Advocates: Resisting the Spread of Surveillance. Cambridge, MA: The MIT Press.

Carey, R. and Burkell, J. (2009). ‘A Heuristics Approach to Understanding Privacy-Protecting Behaviors in Digital Social Environments’, in I. Kerr, V. Steeves, and C. Lucock (eds.). Lessons From the Identity Trail: Anonymity, Privacy and Identity in a Networked Society. Toronto: Oxford University Press. 65-82.

Chew, M., Balfanz, D., and Laurie, B. (2008). ‘(Under)mining Privacy in Social Networks’, Proceedings of W2SP: Web 2.0 Security and Privacy: 1-5.

Fischer-Hübner, S., Pettersson, J. S., and Bergmann, M. (2008). ‘HCI Designs for Privacy-Enhancing Identity Management’, in A. Acquisti and S. Gritzalis (eds.). Digital Privacy: Theory, Technologies, and Practices. New York: Auerbach Publications. 229-252.

Flaherty, D. (1972). Privacy in Colonial New England. Charlottesville, VA: University Press of Virginia.

Hoofnagle, C., King, J., Li, S., and Turow, J. (2010). ‘How different are young adults from older adults when it comes to information privacy attitudes and policies?’ Available at: http://www.ftc.gov/os/comments/privacyroundtable/544506-00125.pdf

Karyda, M. and Kokolakis, S. (2008). ‘Privacy Perceptions among Members of Online Communities’, in A. Acquisti and S. Gritzalis (eds.). Digital Privacy: Theory, Technologies, and Practices. New York: Auerbach Publications. 253-266.

Kerr, I., Barrigar, J., Burkell, J., and Black, K. (2009). ‘Soft Surveillance, Hard Consent: The Law and Psychology of Engineering Consent’, in I. Kerr, V. Steeves, and C. Lucock (eds.). Lessons From the Identity Trail: Anonymity, Privacy and Identity in a Networked Society. Toronto: Oxford University Press. 5-22.

Marwick, A. E., Murgia-Diaz, D., and Palfrey Jr., J. G. (2010). ‘Youth, Privacy and Reputation (Literature Review)’. Berkman Center Research Publication No. 2010-5; Harvard Law Working Paper No. 10-29. Available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1588163

O’Reilly, T. and Battelle, J. (2009). ‘Web Squared: Web 2.0 Five Years On’. Presented at the Web 2.0 Summit 2009. Available at: http://www.web2summit.com/web2009/public/schedule/detail/10194

Steeves, V. (2009). ‘Reclaiming the Social Value of Privacy’, in I. Kerr, V. Steeves, and C. Lucock (eds.). Lessons From the Identity Trail: Anonymity, Privacy and Identity in a Networked Society. Toronto: Oxford University Press.

Steeves, V. and Kerr, I. (2005). ‘Virtual Playgrounds and Buddybots: A Data-Minefield for Tweens’, Canadian Journal of Law and Technology 4(2): 91-98.

Turow, J., King, J., Hoofnagle, C. J., Bleakley, A., and Hennessy, M. (2009). ‘Contrary to what marketers say, Americans reject tailored advertising and three activities that enable it’. Available at: http://graphics8.nytimes.com/packages/pdf/business/20090929-Tailored_Advertising.pdf

Turow, J. (2007). ‘Cracking the Consumer Code: Advertisers, Anxiety, and Surveillance in the Digital Age’, in K. Haggerty and R. Ericson (eds.). The New Politics of Surveillance and Visibility. Toronto: University of Toronto Press.