I want to toss up a few links that I’ve found particularly interesting/helpful over the past couple of months. I’ll begin with a way to read, move to a review of the newest tool for electronic education, and conclude with an article concerning the commercialization of the core platforms through which electronic resources are accessed.
What if those citizens used data-mining principles to prepare and filter their reading? Donal Latumahina has eight processes that you can use to get the most out of the books that you’re reading, processes that are guided by the objective to get the greatest possible amount of useful information from the text. It’s amazing what happens when you objectively structure your reading, rather than just letting yourself be carried along by it.
There has been a sustained argument, across the ’net and in traditional circles, that privacy is being redefined before our very eyes. Oftentimes we can see how a word transforms by studying its etymology – this is helpful in understanding the basis of the words that we utter. What do we do, however, when we work to redefine not just a word’s definition (such as what the term ‘cool’ refers to) but its normative horizons?
In redefining the word ‘privacy’ to account for how people are empirically protecting their privacy, are we redefining the word, or the normative horizon that it captures? Moreover, can we genuinely assume that the term’s normative guide is changing simply because recent rapid changes in technology have made it harder to exercise our right to privacy in digitized environments? To argue that these normative boundaries are shifting largely because of how digital networks have been programmed presupposes that the networks cannot be designed in any other way – that digital content will always flow as it does now, the same way that gravity acts on our physical bodies. The difficulty in maintaining such an analogy is that it assumes there are natural laws to the immanent programming languages that structure how we can participate in digital environments.
Normally when I talk about retaining data, I talk about retaining targeted information – don’t save everything, only what you need, and (if the information is about other people) only what you said you’d retain for particular stated purposes.
I was at a TA Conference yesterday, and at the tail end of it one of the presentations was about creating a teaching portfolio and a teaching philosophy. In particular, we were encouraged to save everything from students that pertained to how we taught, as well as copies of course outlines/lecture notes/etc. The idea was that by aggregating all data, especially that from students, we could filter out what we don’t need – it’s easier to filter than to find more data.
This is the exact opposite of how I think data retention should operate, and I’m not alone. The principles behind the EU’s Safe Harbour framework, as well as the UoG’s privacy policies, support my stance: all collected information must be targeted, the people whose data is being collected must be aware of why it is being collected, and there must be a stipulated duration for which the information will be retained. I’m not really concerned with whether this particular presenter was recommending actions that were, at the least, in tension with the UoG’s privacy principles – what I’m interested in is whether you would keep all of this information. I can’t, not unless I’m totally up front with students, but I don’t know if that’s just me being particularly paranoid. Is retaining this information common practice in the teaching profession?
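To make the contrast with the ‘save everything, filter later’ approach concrete, here is a minimal sketch of the targeted approach in Python. The record fields, purposes, and retention windows are all invented for illustration – they don’t come from any actual UoG or Safe Harbour policy – but the logic captures the three principles: a record is kept only if it was collected for a purpose the subject was told about, and only for the stipulated duration.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record type; the fields are illustrative, not drawn from
# any real policy document.
@dataclass
class Record:
    subject: str          # whose data this is
    purpose: str          # the purpose the subject was told about
    collected_on: date    # when the data was gathered

# Purposes the subjects were actually informed of, mapped to how long each
# purpose justifies keeping the data (durations assumed for illustration).
STATED_PURPOSES = {
    "grade-appeal evidence": timedelta(days=365),
    "teaching-portfolio feedback": timedelta(days=730),
}

def may_retain(record: Record, today: date) -> bool:
    """Keep a record only if its purpose was stated up front and its
    retention window has not yet lapsed."""
    window = STATED_PURPOSES.get(record.purpose)
    if window is None:
        return False  # no stated purpose: the data should not be kept
    return today <= record.collected_on + window

# Example: feedback collected under a one-year purpose, checked two years on
old = Record("student A", "grade-appeal evidence", date(2008, 1, 10))
print(may_retain(old, date(2010, 1, 10)))  # prints False
```

The point of the sketch is that retention decisions are made against the purposes stated at collection time, rather than by hoarding first and filtering later.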
The Canadian SIGINT Summaries includes downloadable copies, along with summary, publication, and original source information, of leaked CSE documents.
Parsons, Christopher; Molnar, Adam. (2021). “Horizontal Accountability and Signals Intelligence: Lesson Drawing from Annual Electronic Surveillance Reports,” in David Murakami Wood and David Lyon (Eds.), Big Data Surveillance and Security Intelligence: The Canadian Case (UBC Press).
Parsons, Christopher. (2015). “Stuck on the Agenda: Drawing lessons from the stagnation of ‘lawful access’ legislation in Canada,” in Michael Geist (Ed.), Law, Privacy and Surveillance in Canada in the Post-Snowden Era (University of Ottawa Press).
Parsons, Christopher. (2015). “The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians,” Telecom Transparency Project.
Parsons, Christopher. (2015). “Beyond the ATIP: New methods for interrogating state surveillance,” in Jamie Brownlee and Kevin Walby (Eds.), Access to Information and Social Justice (Arbeiter Ring Publishing).
Bennett, Colin; Parsons, Christopher; Molnar, Adam. (2014). “Forgetting and the right to be forgotten,” in Serge Gutwirth et al. (Eds.), Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges.
Bennett, Colin; Parsons, Christopher. (2013). “Privacy and Surveillance: The Multi-Disciplinary Literature on the Capture, Use, and Disclosure of Personal Information in Cyberspace,” in W. Dutton (Ed.), Oxford Handbook of Internet Studies.
McPhail, Brenda; Parsons, Christopher; Ferenbok, Joseph; Smith, Karen; and Clement, Andrew. (2013). “Identifying Canadians at the Border: ePassports and the 9/11 legacy,” in Canadian Journal of Law and Society 27(3).
Parsons, Christopher; Savirimuthu, Joseph; Wipond, Rob; McArthur, Kevin. (2012). “ANPR: Code and Rhetorics of Compliance,” in European Journal of Law and Technology 3(3).