For several months, a handful of others in the Canadian privacy and security community and I have been mulling over what Bill C-30, better known as Canada’s ‘lawful access’ legislation, might mean for the future of encryption policy in Canada. Today, I’m happy to announce that one of the fruits of these conversations, a paper that I’ve been working on with Kevin McArthur, is now public. The paper, titled “Understanding the Lawful Access Decryption Requirement,” examines the potential implications of the legislation at length. Our analysis considers how C-30 might force companies to adopt key escrows, or decryption key repositories. After identifying some of the problems associated with these repositories, we suggest how to amend the legislation to ensure that corporations will not have to establish key escrows. We conclude by outlining the dangers of leaving the legislative language as it stands today. The full abstract, and download link, follow.
Canada’s lawful access legislation, Bill C-30, includes a section that imposes decryption requirements on telecommunications service providers. In this paper we analyze these requirements to conclude that they may force service providers to establish key escrow, or decryption key retention, programs. We demonstrate the significance of these requirements by analyzing the implications that such programs could have for online service providers, companies that provide client software to access cloud services, and the subscribers of such online services. The paper concludes by suggesting an amendment to the bill, to ensure that corporations will not have to establish escrows, and by speaking to the dangers of not implementing such an amendment.
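To make concrete what a “key escrow, or decryption key retention, program” entails, here is a minimal, purely illustrative sketch in Python. All names here are hypothetical and the design is deliberately naive: a real provider would use asymmetric key wrapping, hardware security modules, and audited access controls rather than an in-memory dictionary.

```python
import secrets

class EscrowedMessenger:
    """Toy model of a service provider that retains subscribers' keys.

    The point of the sketch: once the provider keeps a copy of each
    decryption key, it can decrypt (or surrender the key for) any
    subscriber's traffic without that subscriber's participation.
    """

    def __init__(self):
        self.escrow = {}  # subscriber -> retained copy of their key

    def provision_user(self, user):
        key = secrets.token_bytes(32)
        # Retaining this copy is precisely what makes the arrangement
        # a "key escrow" program.
        self.escrow[user] = key
        return key

    def lawful_access(self, user):
        # Under a decryption requirement, the provider can hand over
        # the retained key in response to a government demand.
        return self.escrow.get(user)

svc = EscrowedMessenger()
alice_key = svc.provision_user("alice")
retained = svc.lawful_access("alice")  # same key material as alice_key
```

The sketch also shows why the paper treats escrows as a structural risk rather than a per-request power: the retained-key repository exists, and is attackable, whether or not any lawful access demand is ever made.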
Download paper at the Social Sciences Research Network
For roughly the past two years I’ve been working with colleagues to learn how Automatic Number Plate Recognition (ANPR) systems are used in British Columbia, Canada’s westernmost province. As a result of this research, one colleague, Rob Wipond, has published two articles on how local authorities and the RCMP are using ANPR technologies. Last February I disclosed some of our findings at the Reboot privacy and security conference, highlighting potential uses of the technology and many of the access to information challenges that we had experienced in the course of our research. Another colleague, Kevin McArthur, has written several pieces about ANPR on his website over the years and is largely responsible for getting Rob and me interested, and involved, in researching the technology and the practices associated with it.
The most recent piece of work to come out of our research is a paper that Joseph Savirimuthu, Rob, Kevin, and I have written. Joseph and I will be presenting it in Florence later this month. The paper, titled “ANPR: Code and Rhetorics of Compliance,” examines BC and UK deployments of ANPR systems to explore the rationales and obfuscations linked to the programs. The paper is presently in a late draft, so if you have any comments or feedback then please send them my way. The abstract is below, and you can download the paper from the Social Sciences Research Network.
Automatic Number Plate Recognition (ANPR) systems are gradually entering service in Canada’s western province of British Columbia and are prolifically deployed in the UK. In this paper, we compare and analyze some of the politics and practices underscoring the technology in these jurisdictions. Drawing from existing and emerging research we identify key actors and how authorities marginalize access to the systems’ operation. Such marginalization is accompanied by rhetorics of privacy and security that are used to justify novel mass surveillance practices. Authorities justify the public’s lack of access to ANPR practices and technical characteristics as a key to securing environments and making citizens ‘safe’. After analyzing incongruences between authorities’ conceptions of privacy and security, we articulate means of resisting intrusive surveillance practices by reshaping agendas surrounding ANPR.
Download paper from the Social Sciences Research Network
UPDATE: The paper is now published in the European Journal of Law and Technology
In the wake of a stunning data breach, the University of Victoria campus community could only hope that the institution would do everything it could to regain lost trust. One such opportunity arose this week, when controversial Google Streetview vehicles were scheduled to canvass the campus. Unfortunately, the opportunity was squandered: it is largely by accident that the campus community has learned – or will learn – that Google is capturing images and wireless access point information.
In this short post I want to discuss the seriousness of the University’s failure to disclose Google’s surveillance of the campus. I begin by providing a quick overview of Streetview’s privacy controversies. I then describe the serious data breach that UVic suffered earlier this year, which has left the institution with a significant trust deficit. A discussion of the institution’s failure to disclose Google’s presence to the community, and its attempts to chill speech around Google’s presence, follows. I conclude by suggesting how institutions can learn from UVic’s failures and disclose the presence of controversial, potentially privacy invasive, actors in order to rebuild flagging trust.
Google Streetview and Privacy
Streetview has been a controversial product since its inception. There were serious concerns when it captured images of people in sensitive places or engaged in indiscreet acts. Initially, the company offered only a non-trivial process for individuals to remove images from the Google Streetview database; this process has since been replaced with an option to blur sensitive information. Various jurisdictions have challenged Google’s conceptual and legal argument that taking images of public spaces with a Streetview vehicle is equivalent to a tourist taking pictures in a public space.
After disappearing for an extended period of time – to the point that the Globe and Mail reported that the legislation was dead – the federal government’s lawful access legislation is back on the agenda. In response to the Globe and Mail’s piece, the Public Safety Minister stated that the government was not shelving the legislation and, in response to the Minister’s statements, Open Media renewed the campaign against the bill. What remains to be seen is just how ‘lively’ this agenda item really is; it’s unclear whether the legislation remains on a back burner or whether the government is truly taking it up.
While the politics of lawful access have been taken up by other parties, I’ve been poring over articles and ATIP requests related to existing and future policing powers in Canada. In this post I first (quickly) outline communications penetration in Canada, with a focus on how social media services are used. This will underscore just how widely Canadians use digitally-mediated communications systems and, by extension, how many Canadians may be affected by lawful access powers. I then draw from publicly accessible sources to outline how authorities presently monitor social media. Next, I turn to documents that have been released through federal access to information laws to explicate how the government envisions the ‘nuts and bolts’ of its lawful access legislation. This post concludes with a brief discussion of the kind of oversight that is most appropriate for the powers that the government is seeking.
Research in Motion has a problem. For years they promoted themselves as a top-notch mobile security company. During those initial years most of their products were pitched at enterprise users.
Then RIM got into the consumer market.
Most consumers equate RIM’s products with security, email, BlackBerry Messenger (BBM), and a tepid suite of other smartphone features. Most of the people who report on the company tend to agonize over the fact that RIM complies with government surveillance laws. Such reports inevitably emerge each time the public realizes that RIM meets its lawful access requirements for consumer-line products.
In this post, I want to briefly address some of the BBM-related security concerns and try to (again) correct the record surrounding the security promises of the messaging service. After outlining the deficits of consumer BBM products I briefly argue that we need to avoid fetishizing technology, encryption, or the law, and should instead focus on the democratic implications of the lawful access-style laws that governments use to access citizens’ communications.
In the interest of full disclosure: I have family and friends who work at Research In Motion. I haven’t spoken with any of them about this post or its contents, and none of them works directly on either BBM or RIM’s encryption systems.
Online voting is a serious issue that Canadians need to remain aware of and become educated about. I’ve previously written about issues surrounding Internet-based voting, and was recently interviewed about online elections in light of problems that the New Democratic Party (NDP) had during its 2012 leadership convention. While I’m generally happy with how the interview played out – and thankful to colleagues for linking me up with the radio station I spoke on – there were a few items that didn’t get covered in the interview because of time limitations. This post is meant to take up those missed items, as well as to let you listen to the interview for yourself.
Public Dialogue Concerning the NDP Leadership ‘Attack’
There are claims that the attacks against the NDP’s online voting system were “sophisticated” and that “the required organization and the demonstrated orchestration of the attack indicates that this was a deliberate effort to disrupt or negate the election by a knowledgeable person or group.” Neither of these statements is entirely fair or particularly accurate. Publicly disclosed information indicates that around 10,000 IP addresses were used to launch a small Distributed Denial of Service (DDoS) attack against the voting system used during the NDP’s convention. To be clear: this is a relatively tiny botnet.
While such a botnet might justifiably overwhelm some small business networks, or other organizations that haven’t seen the need to establish protections against DDoS scenarios, it absolutely should not be capable of compromising an electoral process. Such a process should be significantly hardened: scalable infrastructure ought to have been adopted, and all services ought to sit behind a defensible security perimeter. To give you a sense of just how cheap a botnet (of a much larger size) can be: in 2009, an 80,000–120,000-machine botnet ran around $200/day – you even got a 3-minute trial window! In 2010, VeriSign’s iDefense Intelligence Operations Team reported that a comparable botnet would run around $9/hr, or $67/day.
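As a rough illustration of one such hardening layer, a token-bucket rate limiter caps how many requests a single source can push through in a burst. This is a minimal sketch (the class name and parameters are illustrative only; real deployments absorb DDoS traffic at the network edge with scrubbing services and load balancers, not in application code):

```python
import time

class TokenBucket:
    """Per-source token bucket: allows short bursts, then throttles."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens in proportion to elapsed time, up to capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst of 20 rapid requests from one source: the first `capacity`
# requests pass, the rest are refused until tokens refill.
bucket = TokenBucket(rate=1, capacity=5)
results = [bucket.allow() for _ in range(20)]
```

A flood from 10,000 addresses requires filtering upstream of any single server, of course, but the point stands: defences of this general kind are cheap, well understood, and exactly what a hardened electoral system should have had in place.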
If a few Google searches and a couple hundred dollars from a PayPal account can get you a small botnet (and, depending on who you rent your bots from, access to technical support to help launch the attack), then we’re not dealing with a particularly sophisticated individual or group, or one that necessarily possesses much knowledge about these kinds of attacks. Certainly the act of hiring a botnet demonstrates intent, but it’s an incredibly amateurish attempt, and one that should have been easily stopped by the vendor in question.