On Wednesday, July 27, 2011, I’ll be talking at the forum to stop online spying. The forum is part of a larger national campaign to raise awareness about the potential for state surveillance and the implications of the Government of Canada’s (expected) surveillance legislation, which is anticipated in the fall 2011 session. Amongst other provisions, the legislation is expected to significantly reduce the degree of judicial oversight surrounding government acquisition of subscriber data – data that Internet users provide to their ISPs, chat services (e.g. MSN, AIM), social networking sites (e.g. Google+, Orkut, Facebook), and other online communications media.
I’ll be giving a short talk entitled “Creeping Towards a State of Surveillance” that is meant as an introduction to the gravity and nuances of surveillance legislation. In it, I’ll first talk about what constitutes surveillance and what constitutes function creep. From there, I’ll briefly discuss the challenges associated with classifying data as ‘public’ or ‘private’ and the deficits of ‘anonymizing’ data. This will focus on distinguishing between so-called traffic and content data types, and the kinds of private information that can be extracted from ‘mere’ traffic data. I’ll wrap things up with a quick overview of the positive, and problematic, aspects of audits, advocates, and government commissioners in restraining the state’s appetite for intelligence for so-called policing actions.
If you’re interested in coming out, head over to StopOnlineSpying.com and register. The talks start at 1:30 and run until 5:30, and constitute a non-partisan discussion of the forthcoming legislative agenda. The forum is meant to be heavy on discussion and maximally accessible to people who don’t spend their lives studying privacy, democracy, or telecommunications, and it features a good mix of advocates and scholars. If you can’t make the forum, but are bothered by or want to learn more about the Canadian government’s expanded surveillance laws, check out the national campaign.
Siva Vaidhyanathan’s The Googlization of Everything (And Why We Should Worry) is a challenging, if flawed, book. Vaidhyanathan’s central premise is that we should work to influence or regulate search systems like Google (and, presumably, Yahoo! and Bing) to take responsibility for how the Web delivers knowledge to us, the citizens of the world. In addition to pursuing this premise, the book tries to deflate the hyperbole around contemporary technical systems by arguing against notions of technological determinism/utopianism.
As I will discuss, the book largely succeeds in pointing to reasons why regulation is an important policy instrument to keep available. The book also attempts to situate itself within the science and technology studies field, and here it is less successful. Ultimately, while Vaidhyanathan offers insight into Google itself – its processes, its products, and the implications of using the company’s systems – he is less successful in digging into the nature of technology, Google, culture, and society at a theoretical level. This leaves the reader with an empirical understanding of the subject matter without significant analytic resources to unpack the theoretical significance of their newfound empirical understandings.
Google Analytics has become an almost ever-present part of the contemporary Internet. Large, medium, and small sites alike track their website visitors using Google’s free tools to identify where visitors are coming from, what they’re looking at (and for how long), where they subsequently navigate to, what keywords bring people to websites, and whether internal metrics are in line with advertising campaign goals. As of 2010, roughly 52% of all websites used Google’s analytics system, and it accounted for 81.4% of the traffic analysis tools market. As of this writing, Google’s system is used by roughly 58% of the top 10,000 websites, 57% of the top 100,000 websites, and 41.5% of the top million sites. In short, Google is providing analytics services to a considerable number of the world’s most commonly frequented websites.
In this short post I want to discuss the terms of using Google Analytics. Based on conversations I’ve had over the past several months, it seems that many medium and small business owners are unaware of the conditions that Google places on using the tool. Further, independent bloggers are using analytics engines – either intentionally or by default through their website host/creator – and are ignorant of what they must do to legitimately use them. After outlining the brief bits of legalese that Google requires – and suggesting what Google should do to ensure terms of service compliance – I’ll suggest a business model/addition that could simultaneously assist in privacy compliance while netting an enterprising company/individual a few extra dollars in revenue.
The Offensive Internet: Speech, Privacy, and Reputation is an essential addition to academic, legal, and professional literatures on the prospective harms raised by Web 2.0 and social networking sites more specifically. Levmore and Nussbaum (eds.) have drawn together high-profile legal scholars, philosophers, and lawyers to trace the dimensions of how the Internet can cause harm, with a focus on the United States’ legal code to understand what enables harm and how it might be mitigated in the future. The editors have divided the book into four sections – ‘The Internet and Its Problems’, ‘Reputation’, ‘Speech’, and ‘Privacy’ – and included a total of thirteen contributions. On the whole, the collection is strong (even if I happen to disagree with many of the policy and legal changes that many authors call for).
In this review I want to cover the particularly notable elements of the book and then move to a meta-critique of it. Specifically, I critique how some authors perceive the Internet as an ‘extra’ that lacks significant difference from earlier modes of disseminating information, as well as the position that the Internet is somehow a less real/authentic environment for people to work, play, and communicate within. If you read no further, leave with this: this is an excellent, well-crafted edited volume and I highly recommend it.
The Canadian SIGINT Summaries includes downloadable copies, along with summary, publication, and original source information, of leaked CSE documents.
Parsons, Christopher; Molnar, Adam. (2021). “Horizontal Accountability and Signals Intelligence: Lesson Drawing from Annual Electronic Surveillance Reports,” in David Murakami Wood and David Lyon (Eds.), Big Data Surveillance and Security Intelligence: The Canadian Case.
Parsons, Christopher. (2015). “Stuck on the Agenda: Drawing lessons from the stagnation of ‘lawful access’ legislation in Canada,” in Michael Geist (Ed.), Law, Privacy and Surveillance in Canada in the Post-Snowden Era (University of Ottawa Press).
Parsons, Christopher. (2015). “The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians,” Telecom Transparency Project.
Parsons, Christopher. (2015). “Beyond the ATIP: New methods for interrogating state surveillance,” in Jamie Brownlee and Kevin Walby (Eds.), Access to Information and Social Justice (Arbeiter Ring Publishing).
Bennett, Colin; Parsons, Christopher; Molnar, Adam. (2014). “Forgetting and the right to be forgotten,” in Serge Gutwirth et al. (Eds.), Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges.
Bennett, Colin; Parsons, Christopher. (2013). “Privacy and Surveillance: The Multi-Disciplinary Literature on the Capture, Use, and Disclosure of Personal Information in Cyberspace,” in W. Dutton (Ed.), Oxford Handbook of Internet Studies.
McPhail, Brenda; Parsons, Christopher; Ferenbok, Joseph; Smith, Karen; Clement, Andrew. (2013). “Identifying Canadians at the Border: ePassports and the 9/11 legacy,” in Canadian Journal of Law and Society 27(3).
Parsons, Christopher; Savirimuthu, Joseph; Wipond, Rob; McArthur, Kevin. (2012). “ANPR: Code and Rhetorics of Compliance,” in European Journal of Law and Technology 3(3).