ISP Audits in Canada

There are ongoing concerns in Canada about the CRTC’s capacity to gauge and evaluate the quality of Internet service that Canadians receive. This was most recently brought to the fore when the CRTC announced that Canada ranked second to Japan in broadband access speeds. Such a stance is PR spin and, as noted by Peter Nowak, “[o]nly in the halcyon world of the CRTC, where the sky is purple and pigs can fly, could that claim possibly be true.” This head-in-the-sand approach to understanding the Canadian broadband environment is, unfortunately, also reflected in the lack of a federal digital strategy and in wholly inadequate funding for even the most basic governmental cyber-security.

To return the CRTC from the halcyon world in which it is presently stuck, and to establish firm empirical data to guide a digital economic strategy, the Government of Canada should establish a framework for auditing ISPs’ infrastructure and network practices. Ideally this would result in an independent body that could examine the quality and speed of broadband throughout Canada. Its methodology and results would be publicly published, assuring all parties – businesses, citizens, and consumers – that they could trust and rely upon ISPs’ infrastructure. Importantly, having an independent body research and publish data concerning Canadian broadband would relieve companies and consumers of having to assume this role, freeing them to use the Internet for productive (rather than watchdog-related) purposes.
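To make the auditing idea slightly more concrete, here is a minimal sketch of what one small component of such a body’s measurement harness might look like: repeatedly timing downloads from a set of test endpoints and reporting median latency and throughput. The endpoint URLs and trial count are placeholders of my own, not any proposed audit methodology, and a real framework would of course demand far more rigour.

```python
# Hypothetical sketch of a broadband audit probe: time downloads from a set
# of test endpoints and report median latency and throughput.
# The URLs below are placeholders, not real audit infrastructure.
import statistics
import time
import urllib.request

TEST_URLS = [
    "https://example.com/",  # placeholder endpoint
    "https://example.org/",  # placeholder endpoint
]
TRIALS = 5

def probe(url):
    """Return (latency_seconds, throughput_bits_per_second) for one download."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=30) as response:
        first_byte = time.monotonic()          # rough time-to-first-byte
        payload = response.read()
    elapsed = time.monotonic() - first_byte
    throughput = (len(payload) * 8) / elapsed if elapsed > 0 else 0.0
    return first_byte - start, throughput

results = [probe(url) for url in TEST_URLS for _ in range(TRIALS)]
latencies, throughputs = zip(*results)
print(f"median latency:    {statistics.median(latencies) * 1000:.1f} ms")
print(f"median throughput: {statistics.median(throughputs) / 1e6:.2f} Mbit/s")
```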

Continue reading

Review of The Googlization of Everything

Siva Vaidhyanathan’s The Googlization of Everything (And Why We Should Worry) is a challenging, if flawed, book. Vaidhyanathan’s central premise is that we should work to influence or regulate search systems like Google (and, presumably, Yahoo! and Bing) to take responsibility for how the Web delivers knowledge to us, the citizens of the world. In addition to pursuing this premise, the book tries to deflate the hyperbole around contemporary technical systems by arguing against notions of technological determinism/utopianism.

As I will discuss, the book largely succeeds in pointing to reasons why regulation is an important policy instrument to keep available. The book also attempts to situate itself within the science and technology studies field, and here it is less successful. Ultimately, while Vaidhyanathan offers insight into Google itself – its processes, products, and the implications of using the company’s systems – he is less successful in digging into the nature of technology, Google, culture, and society at a theoretical level. This leaves the reader with an empirical understanding of the subject matter but without significant analytic resources to unpack the theoretical significance of that newfound empirical understanding.

Continue reading

Review of The Offensive Internet: Speech, Privacy, and Reputation

The Offensive Internet: Speech, Privacy, and Reputation is an essential addition to the academic, legal, and professional literatures on the prospective harms raised by Web 2.0 and social networking sites more specifically. Levmore and Nussbaum (eds.) have drawn together high-profile legal scholars, philosophers, and lawyers to trace the dimensions of how the Internet can cause harm, with a focus on the United States’ legal code to understand what enables harm and how it might be mitigated in the future. The editors have divided the book into four sections – ‘The Internet and Its Problems’, ‘Reputation’, ‘Speech’, and ‘Privacy’ – and included a total of thirteen contributions. On the whole, the collection is strong (even if I happen to disagree with many of the policy and legal changes that many authors call for).

In this review I want to cover the particularly notable elements of the book and then move to a meta-critique. Specifically, I critique how some authors perceive the Internet as an ‘extra’ that lacks significant difference from earlier modes of disseminating information, as well as the position that the Internet is somehow a less real/authentic environment for people to work, play, and communicate within. If you read no further, leave with this: this is an excellent, well-crafted edited volume and I highly recommend it.

Continue reading

ISPs, Advocates, and Framing at the 2011 Telecom Summit

Each year Canada’s leaders in telecommunications gather at the Canadian Telecommunications Summit to talk about ongoing policy issues, articulate their concerns about Canada’s status in the world of telecommunications, and share lessons and experiences with one another. This year’s Summit was no exception. While some commentators have accused this year’s event of merely rehashing previous years’ content – it is true that each Summit does see similar topics on the conference agenda, with common positions taken each year – some interesting points did emerge this year.

Specifically, discussions regularly arose about the valuation of telecom services and about supply and demand in the Canadian ISP space, along with some interesting tidbits about the CRTC. For many people in the industry, what I’ll be talking about isn’t exactly new; those outside the industry’s fold, however, may find elements of it interesting. After outlining some of the discussions that took place, I will point to something that was particularly striking throughout the Summit events I attended: Open Media loomed like a spectre, shaping many of the discussions and talking points despite not having a single formal representative in attendance.

Continue reading

Security, Hierarchy, and Networked Governance

The capacity for the Internet to route around damage and censorship is dependent on there being multiple pathways for data to be routed. What happens when there are incredibly few pathways, and when many of the existing paths contain hidden traps that undermine communications security and privacy? This question is always relevant when talking about communications, but has become particularly topical given recent events that compromised some of the Internet’s key security infrastructure and trust networks.

On March 22, 2011, Tor researchers disclosed a vulnerability in the certificate authority (CA) system. Certificates are used to encrypt data traffic between parties and to guarantee that the parties presenting them are who they claim to be. The CA system underpins a massive number of the Internet’s trust relationships; when individuals log into their banks, some social networking services, and many online email services, their data traffic is encrypted to prevent a third party from listening in on the content of the communication. Those encrypted sessions are made possible by the certificates issued by certificate authorities. The Tor researchers announced that an attacker had compromised a CA and issued certificates that let the attacker impersonate the security credentials associated with many of the world’s most prominent websites. Few individuals would ever detect this subterfuge. In effect, Tor researchers discovered that a central element of the Internet’s trust network was broken.
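To illustrate the trust relationship being described, consider what an ordinary TLS client does: it accepts any certificate that chains up to a CA in its local trust store. The Python sketch below (the hostname is simply a placeholder) shows that validation step, and also why a fraudulent certificate signed by a compromised but still-trusted CA would sail through it unnoticed.

```python
# Minimal sketch of the client side of the CA trust relationship: open a TLS
# connection and let the local root-CA store validate the server certificate.
# "www.example.com" is a placeholder host, not any specific service.
import socket
import ssl

HOST = "www.example.com"

context = ssl.create_default_context()          # loads the system's trusted CAs
with socket.create_connection((HOST, 443), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        cert = tls_sock.getpeercert()
        subject = dict(item[0] for item in cert["subject"])
        issuer = dict(item[0] for item in cert["issuer"])
        print("issued to:", subject.get("commonName"))
        print("issued by:", issuer.get("commonName"))

# If a compromised CA signs a fraudulent certificate for HOST, this validation
# still succeeds -- which is precisely the failure the Tor researchers described.
```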

In this post I want to do a few things. First, I’ll briefly describe the attack and its accompanying risks; this will, in part, involve a brief discussion of modes of surveillance and the motivations behind different gradients of surveillance. I next address a growing problem for today’s Internet users: the points of trust we depend on, such as CAs and the DNS infrastructure, are increasingly unreliable. As a result, states can overtly or subtly manipulate these points of trust to disrupt or monitor their citizens’ communications. Finally, I suggest that in spite of these points of control, states are increasingly limited in their capacities to unilaterally enforce their will. As a consequence of networked governance, and its accompanying power structures, citizens can impose accountability on states and limit their ability to (re)distribute power across and between nodes of networks. Thus, networked governance not only transforms state power but redistributes (some) power to non-state actors, empowering those actors to resist illegitimate state actions.
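As a concrete, if crude, example of checking one of these points of trust, the sketch below queries several public resolvers for the same domain name and flags disagreement. It relies on the third-party dnspython package, and the resolver addresses and domain are illustrative choices of mine rather than a serious measurement design.

```python
# Rough sketch: ask several public resolvers for the same name and flag
# divergent answers, one crude signal that a network path may be tampering
# with DNS. Requires the third-party "dnspython" package; the resolvers and
# domain below are illustrative only.
import dns.resolver

RESOLVERS = {"Google": "8.8.8.8", "Cloudflare": "1.1.1.1", "Quad9": "9.9.9.9"}
NAME = "example.com"

answers = {}
for label, ip in RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ip]
    try:
        response = resolver.resolve(NAME, "A")
        answers[label] = sorted(rr.to_text() for rr in response)
    except Exception as exc:                     # timeouts, SERVFAIL, etc.
        answers[label] = [f"error: {exc}"]

for label, records in answers.items():
    print(f"{label:10s} -> {records}")

if len({tuple(v) for v in answers.values()}) > 1:
    print("Resolvers disagree -- worth a closer look, though CDNs and load "
          "balancing can also produce benign differences.")
```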

Continue reading

Technology and Politics in Tunisia and Iran: Deep Packet Surveillance

For some time, I’ve been keeping an eye on how the Iranian government monitors, mediates, and influences data traffic on public networks. This has seen me write several posts, here and elsewhere, about the government’s usage of deep packet inspection, the implications of Iranian government surveillance, and the challenges posed by Iranian ISPs’ most recent network updates. Last month I was invited to give a talk at the Pacific Centre for Technology and Culture about the usage of deep packet inspection by the Iranian and Tunisian governments.

Abstract

Faced with growing unrest that is (at least in part) facilitated by digital communications, repressive nation-states have integrated powerful new surveillance systems into the depths of their nations’ communications infrastructures. In this presentation, Christopher Parsons first discusses the capabilities of a technology, deep packet inspection, which is used to survey, analyze, and modify communications in real time. He then discusses the composition of the Iranian and Tunisian telecommunications infrastructures, outlining how deep packet inspection is used to monitor, block, and subvert encrypted and private communications. The presentation concludes with a brief reflection on how this same technology is deployed in the West, with a focus on how we might identify key actors, motivations, and drivers of the technology in our own network ecologies.
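For readers unfamiliar with the technique, the deliberately simplified sketch below conveys the core idea behind deep packet inspection: looking past packet headers and matching application-layer signatures in the payload itself. The signature patterns are illustrative only and bear no relation to any real vendor’s or government’s deployment.

```python
# Simplified illustration of payload-based classification, the core idea of
# deep packet inspection: match application-layer signatures inside the packet
# body rather than relying on headers alone. Signatures here are illustrative.
import re

SIGNATURES = {
    "http_request": re.compile(rb"^(GET|POST|HEAD) \S+ HTTP/1\.[01]"),
    "tls_client_hello": re.compile(rb"^\x16\x03[\x00-\x03]"),  # TLS handshake record
    "bittorrent": re.compile(rb"\x13BitTorrent protocol"),
}

def classify(payload: bytes) -> str:
    """Return the label of the first matching signature, else 'unknown'."""
    for label, pattern in SIGNATURES.items():
        if pattern.search(payload):
            return label
    return "unknown"

# Example with a fabricated HTTP request payload.
sample = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
print(classify(sample))   # -> "http_request"
```

A real DPI appliance does this at line rate in hardware and can act on the result – throttling, blocking, logging, or rewriting traffic – but the classification step is conceptually the same.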

Note: For more information on the Iranian use of deep packet inspection, see ‘Is Iran Now Actually Using Deep Packet Inspection?’