Data Retention, Protection, and Privacy

Data retention is always a sensitive issue: what is retained, for how long, under what conditions, and who can access the data? Recently, the Memorandum of Understanding (MoU) between the Irish government and telecommunications providers was leaked, giving members of the public a non-redacted view of what these MoUs look like and how they integrate with the European data retention directive. In this post, I want to give a quick primer on the EU data retention directive and identify some key elements of Ireland’s MoU and of the Article 29 Data Protection Working Party’s evaluation of the directive more generally. Finally, I’ll offer a few comments concerning data protection versus privacy protection, using the EU data protection directive as an example. The aim of this post is to identify a few deficiencies in both data retention and data protection laws, and to argue that privacy advocates and government officials should defend privacy first, approaching data protection as a tool rather than an end in itself.

A Quick Primer on EU Data Retention

In Europe, Directive 2006/24/EC (the Data Retention Directive, or DRD) required member states to pass legislation mandating the retention of particular telecommunications data. Law enforcement sees retained data as useful for public safety purposes. A Community-level effort was needed to harmonize data retention: differences in members’ national laws meant that, without the Directive, the EU was unlikely to develop broadly compatible cross-national retention standards. As we will see, this concern remains well after the Directive’s passage. Continue reading

Review: Delete – The Virtue of Forgetting in the Digital Age

Viktor Mayer-Schonberger’s new book Delete: The Virtue of Forgetting in the Digital Age (2009) is a powerful effort to rethink basic principles of computing that threaten humanity’s epistemological nature. In essence, he tries to impress upon us the importance of adding ‘forgetfulness’ to digital data collection processes. The book is masterfully presented. It draws what are arguably correct theoretical conclusions (we need to get a lot better at deleting data to avoid significant normative, political, and social harms) while proposing devastatingly incorrect technological solutions (key among them: legislating ‘forgetting’ into all data formats and operating systems). In what follows, I sketch the aim of the book, some highlights, and why the proposed technological solutions are dead wrong.

The book is concerned with digital systems defaulting to store data ad infinitum (barring ‘loss’ of data on account of shifting proprietary standards). The ‘demise of forgetting’ in the digital era carries significant consequences. Positively, externalizing memory to digital systems preserves information for future generations and facilitates ease of recall through search. Negatively, digital externalizations dramatically shift balances of power and collapse temporal distance. These latter points become the focus of the text, with Mayer-Schonberger arguing that defaulting computer systems to either delete or degrade data over time can address the collapse of temporal distance that presently accompanies digitization. Continue reading
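To make the book’s central proposal concrete, here is a minimal sketch (my own, not Mayer-Schonberger’s) of what ‘forgetting by default’ could look like in practice: every record carries its own expiry date, and a routine purge pass deletes whatever has lapsed. The Record type, field names, and 90-day lifespan are all illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class Record:
    """A stored datum that carries its own expiry date."""
    payload: str
    created_at: datetime
    expires_at: Optional[datetime]  # None means 'keep forever' -- the explicit exception

def purge_expired(store: List[Record], now: Optional[datetime] = None) -> List[Record]:
    """Drop every record whose expiry date has passed; forgetting is the default."""
    now = now or datetime.utcnow()
    return [r for r in store if r.expires_at is None or r.expires_at > now]

# A search-log entry that 'forgets' itself after 90 days.
entry = Record(
    payload="query: data retention directive",
    created_at=datetime.utcnow(),
    expires_at=datetime.utcnow() + timedelta(days=90),
)
log = purge_expired([entry])  # still present today; gone after day 90
```

The design choice worth noticing is that permanence, not deletion, becomes the exception requiring an explicit decision (expires_at=None).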

Google Dashboard – Does It Need Another Name?

I like to pretend that I’m somewhat web savvy, and that I can generally guess where links on large websites will take me. This apparently isn’t the case with Blogger. I have a Blogger account to occasionally comment on blogs in the Google blogosphere, but I despise the service enough that I rarely use it. I do, however, have an interest in Google’s newly released Dashboard, which is intended to show users what Google knows about them and how their privacy settings are configured.

Given that I don’t use Blogger much, I was amazed and pleased to see a link to the Dashboard in the upper right-hand corner of a Blogger page I was reading when I logged in. Was this really the moment where Google made it easy for end-users to identify their privacy settings?

Alas, no. If I were a regular Blogger user I probably would have known better. What I was sent to when I clicked ‘Dashboard’ was my user dashboard for the Blogger service itself. This seems to be a branding issue; I had (foolishly!) assumed that various Google environments serving very different purposes would be labeled differently. Naming multiple things ‘Dashboard’ obfuscates access to a genuinely helpful service that Google is now providing. (I’ll note that a search for ‘Google Dashboard’ also calls up the App Status Dashboard, and that Google Apps also has a ‘Dashboard’ tab!)

Continue reading

Thinking About a ‘Privacy Commons’

In some privacy circles there is a vision of creating a simple method of decoding privacy policies. As it stands, privacy policies ‘exist’ in a nebulous domain of legalese. Few people read these policies, and fewer still understand what they do (and do not) say. The same has traditionally been true of many copyright agreements. To assuage this issue for copyright, Creative Commons was created. Privacy groups are hoping to take some of the lessons from Creative Commons and apply them to privacy policies.

I need to stress that this is a ‘thinking’ piece. I’ve been bothered by some of the models and diagrams used to express the ‘privacy commons’ because, while they’re great academic pieces, they’re nigh useless for the public at large. When I pair the terms ‘public at large’ and ‘useless’, what I am driving at is this: Creative Commons works so well because it put together a VERY simple system that lets people quickly understand what copyright is being asserted over particular works. A privacy commons will live (or, very possibly, die) on its ease of access and use.

So, let’s think about the use-value of any mode of description. The key issue with many commons approaches is that they try to do too much all at once. Is there necessarily a need for a single uniform commons statement, or is privacy sufficiently complicated that we should adopt a medical privacy commons, a banking privacy commons, a social networking privacy commons, and so forth? Perhaps, instead of cutting the privacy cake so granularly (i.e., by market segment), we should boil down key principles and then offer real-language explanations of each principle’s application in particular business environments. This division of the commons is a question that researchers appreciate and struggle with.
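As a thought experiment, a boiled-down commons might amount to little more than a handful of yes/no principles, each tied to a plain-language explanation. The sketch below is entirely hypothetical; every field name and explanation is invented, purely to show how small the machine-readable core of a privacy commons could be.

```python
from typing import Dict, List

# Hypothetical 'privacy commons' vocabulary: a handful of yes/no principles,
# each paired with a plain-language explanation (all field names invented).
PRIVACY_COMMONS_FIELDS: Dict[str, str] = {
    "collects_only_stated_data": "We collect only the data our policy names.",
    "shares_with_third_parties": "Your data may be passed to other companies.",
    "retains_indefinitely": "We keep your data with no deletion date.",
    "allows_user_deletion": "You can ask us to delete your data.",
}

def summarize(policy: Dict[str, bool]) -> List[str]:
    """Render a policy's true flags as short, human-readable statements."""
    return [text for field, text in PRIVACY_COMMONS_FIELDS.items() if policy.get(field)]

# A social-networking site's badge-style summary:
print(summarize({
    "collects_only_stated_data": True,
    "shares_with_third_parties": True,
    "allows_user_deletion": True,
}))
```

Calling summarize() for a given site yields the badge-style, real-language summary that (I’d argue) the public at large actually needs.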

Continue reading

Teaching Portfolio – Save Everything?

Normally when I talk about retaining data, I talk about retaining targeted information: don’t save everything; save only what you need, and (if the information is about other people) only what you said you’d retain, for particular stated purposes.

I was at a TA conference yesterday, and at the tail end of it one of the presentations was about creating a teaching portfolio and a teaching philosophy. In particular, we were encouraged to save everything from students that pertained to how we taught, as well as copies of course outlines, lecture notes, and so on. The idea was that by aggregating all this data, especially that from students, we could later filter out what we don’t need: it’s easier to filter than to find more data.

This is the exact opposite of how I think data retention should operate, and I’m not alone. The principles behind the EU’s Safe Harbour framework, as well as UoG privacy policies, support my stance that all collected information must be targeted, that people whose data is being collected must be aware of why it is being collected, and that there must be a stipulation on how long the information will be retained. I’m not really concerned with whether this particular presenter was recommending actions that were at least in tension with the UoG’s privacy principles; what I’m interested in is whether you would keep all of this information. I can’t, not unless I’m totally up front with students, but I don’t know if that’s just me being particularly paranoid. Is retaining this information common practice in the teaching profession?

Fear, Uncertainty, Doubt and Google Corporation

In recent months more and more attention has been directed towards Google’s data retention policies. In May of 2007 Peter Fleischer, Google’s Global Privacy Counsel, set out three key reasons why his company had to maintain search records:

  1. To improve their services. Specifically, he writes “Search companies like Google are constantly trying to improve the quality of their search services. Analyzing logs data is an important tool to help our engineers refine search quality and build helpful new services . . . The ability of a search company to continue to improve its services is essential, and represents a normal and expected use of such data.”
  2. To maintain security and prevent fraud and abuse. “Data protection laws around the world require Internet companies to maintain adequate security measures to protect the personal data of their users. Immediate deletion of IP addresses from our logs would make our systems more vulnerable to security attacks, putting the personal data of our users at greater risk. Historical logs information can also be a useful tool to help us detect and prevent phishing, scripting attacks, and spam, including query click spam and ads click spam.” (One common middle ground between immediate deletion and indefinite retention is sketched after this list.)
  3. To comply with legal obligations to retain data. “Search companies like Google are also subject to laws that sometimes conflict with data protection regulations, like data retention for law enforcement purposes.” (Source)
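On Fleischer’s second point, one commonly discussed middle ground between immediately deleting IP addresses and retaining them indefinitely is partial anonymization: zeroing out the host bits of logged addresses after some period. (Google has publicly discussed anonymizing aged logs roughly along these lines.) The sketch below is my own illustration of the general technique, not Google’s actual pipeline; the prefix lengths are illustrative choices.

```python
import ipaddress

def anonymize_ip(ip: str) -> str:
    """Zero out the host bits of an address before long-term log storage.

    This keeps logs useful for coarse abuse analysis while no longer
    pinpointing a single machine. The /24 and /48 prefix lengths are
    illustrative assumptions, not any company's documented policy.
    """
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(network.network_address)

print(anonymize_ip("203.0.113.87"))  # -> 203.0.113.0
print(anonymize_ip("2001:db8::1"))   # -> 2001:db8::
```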

Continue reading