One of the largest network vendors in the world is planning to offer its ISP partners an opportunity to modify HTTP headers and get ISPs into the advertising racket. Juniper Networks, which sells routers to ISPs, is partnering with Feeva, an advertising solutions company, to modify the headers of customers’ web requests so that they include geographic information. These modified requests will be transmitted to any and all websites that a customer visits, and will see individuals served targeted advertisements according to their geographic location. Effectively, Juniper’s proposal would see ISPs leverage their existing customer service records to modify customers’ data traffic for the purpose of enhancing the geographic relevance of online advertising. This poses an extreme danger to citizens’ locational and communicative privacy.
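To make the mechanism concrete, here is a minimal sketch of what an ISP middlebox doing this kind of header injection might look like. This is purely illustrative: the actual header names and formats Juniper and Feeva would use are not public, so the `X-Geo-Location` header and the function below are hypothetical.

```python
# Hypothetical sketch of ISP-side header injection. The header name
# "X-Geo-Location" is invented for illustration; the real scheme is not public.

def inject_geo_header(raw_request: str, postal_code: str) -> str:
    """Simulate a middlebox inserting a geographic header into an
    outbound HTTP request before forwarding it to the web server."""
    # Split the request into its header block and (possibly empty) body.
    head, sep, body = raw_request.partition("\r\n\r\n")
    # Append the hypothetical location header after the existing headers.
    head += f"\r\nX-Geo-Location: {postal_code}"
    return head + sep + body

# A plain request leaves the subscriber's machine...
request = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
# ...and arrives at the website carrying the subscriber's location.
modified = inject_geo_header(request, "V8W 2Y2")
```

The point of the sketch is that the customer never sees the modification: the request leaves their machine clean and arrives at the website annotated with location data drawn from the ISP’s billing records.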
Viktor Mayer-Schonberger’s new book Delete: The Virtue of Forgetting in the Digital Age (2009) is a powerful effort to rethink basic principles of computing that threaten humanity’s epistemological nature. In essence, he tries to impress upon us the importance of adding ‘forgetfulness’ to digital data collection processes. The book is masterfully presented. It draws what are arguably correct theoretical conclusions (we need to get much better at deleting data to avoid significant normative, political, and social harms) while proposing devastatingly incorrect technological solutions (chief among them: legislating ‘forgetting’ into all data formats and operating systems). In what follows, I sketch the aim of the book, note some highlights, and explain why the proposed technological solutions are dead wrong.
The book is concerned with digital systems defaulting to store data ad infinitum (barring ‘loss’ of data on account of shifting proprietary standards). The ‘demise of forgetting’ in the digital era carries significant consequences. Positively, externalizing memory to digital systems preserves information for future generations and facilitates ease of recall through search. Negatively, digital externalizations dramatically shift balances of power and obviate temporal distances. These latter points become the focus of the text, with Mayer-Schonberger arguing that defaulting computer systems to either delete or degrade data over time can redress the temporal obviations that presently accompany digitization processes.
In some privacy circles there is a vision of creating a simple method of decoding privacy policies. As it stands, privacy policies ‘exist’ in a nebulous domain of legalese. Few people read these policies, and fewer still understand what they do (and do not) say. The same has traditionally been true of many copyright agreements. To assuage this problem with copyright, Creative Commons was created. Privacy groups are hoping to take some of the lessons from Creative Commons and apply them to privacy policies.
I need to stress that this is a ‘thinking’ piece – I’ve been bothered by some of the models and diagrams used to express the ‘privacy commons’ because, while they’re great academic pieces, I think they’re nigh useless for the public at large. By ‘useless for the public at large’ I mean this: Creative Commons works so well because it put together a VERY simple system that lets people quickly understand what copyright is being asserted over particular works. A privacy commons will live (or, very possibly, die) on its ease of access and use.
So, let’s think about the use-value of any mode of description. The key issue with many commons approaches is that they try to do far too much all at once. Is there necessarily a need for a uniform commons statement, or is privacy sufficiently complicated that we should adopt a medical privacy commons, a banking privacy commons, a social networking privacy commons, and so forth? Perhaps, instead of cutting the privacy cake so granularly (i.e. by market segment), we should try to boil down key principles and then offer plain-language explanations of each principle’s application in particular business environments. This division of the commons is a topic that researchers both appreciate and struggle with.
People use Google and Yahoo! throughout their daily lives – they need to know how to get from point A to point B, need to find ecommerce sites, need to search friends’ blogs, need to learn how to cook fish, and have (generally) grown used to having the equivalent of electronic encyclopedias at their fingertips at all times. I’m not going to address concerns that this might be detrimentally affecting how people learn to retain information (i.e. as information is increasingly retained as search strings rather than as info-articles) but want instead to briefly consider how search intersects with privacy.
We hear about the need to protect our private information all the time. ‘Shred your bank statements’, ‘be wary of online commerce sites’, ‘never share personal information on the ‘net’, and other proclamations of wisdom are regularly uttered in print and video – and, in most cases, completely ignored. Proponents of the commercialization of privacy take this as definitive proof that citizens don’t really care about their privacy as they did in days gone by – people are willing to give up their names, addresses, phone numbers, and other personal information to receive the services they want. In this light, regulators should just butt out – the market has spoken!
The Canadian SIGINT Summaries include downloadable copies of leaked CSE documents, along with summary, publication, and original source information.
Parsons, Christopher; and Molnar, Adam. (2021). “Horizontal Accountability and Signals Intelligence: Lesson Drawing from Annual Electronic Surveillance Reports,” David Murakami Wood and David Lyon (Eds.), Big Data Surveillance and Security Intelligence: The Canadian Case.
Parsons, Christopher. (2015). “Stuck on the Agenda: Drawing lessons from the stagnation of ‘lawful access’ legislation in Canada,” Michael Geist (Ed.), Law, Privacy and Surveillance in Canada in the Post-Snowden Era (University of Ottawa Press).
Parsons, Christopher. (2015). “The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians,” Telecom Transparency Project.
Parsons, Christopher. (2015). “Beyond the ATIP: New methods for interrogating state surveillance,” in Jamie Brownlee and Kevin Walby (Eds.), Access to Information and Social Justice (Arbeiter Ring Publishing).
Bennett, Colin; Parsons, Christopher; Molnar, Adam. (2014). “Forgetting and the right to be forgotten” in Serge Gutwirth et al. (Eds.), Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges.
Bennett, Colin, and Parsons, Christopher. (2013). “Privacy and Surveillance: The Multi-Disciplinary Literature on the Capture, Use, and Disclosure of Personal Information in Cyberspace” in W. Dutton (Ed.), Oxford Handbook of Internet Studies.
McPhail, Brenda; Parsons, Christopher; Ferenbok, Joseph; Smith, Karen; and Clement, Andrew. (2013). “Identifying Canadians at the Border: ePassports and the 9/11 legacy,” in Canadian Journal of Law and Society 27(3).
Parsons, Christopher; Savirimuthu, Joseph; Wipond, Rob; McArthur, Kevin. (2012). “ANPR: Code and Rhetorics of Compliance,” in European Journal of Law and Technology 3(3).