Packet Headers and Privacy

One of the largest network vendors in the world is planning to offer its ISP partners an opportunity to modify HTTP headers and get into the advertising racket. Juniper Networks, which sells routers to ISPs, is partnering with Feeva, an advertising solutions company, to modify data packets’ header information so that the packets include geographic information. These modified packets will be transmitted to any and all websites that the customer visits, allowing individuals to receive advertisements targeted to their geographic location. Effectively, Juniper’s proposal would see ISPs leverage their existing customer service information to modify customers’ data traffic for the purpose of enhancing the geographic relevance of online advertising. This poses an extreme danger to citizens’ locational and communicative privacy.
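To make the mechanism concrete, here is a minimal sketch of what in-path header injection might look like. The header name `X-Geo` and its format are my assumptions for illustration; the actual field Juniper and Feeva would use has not been disclosed here.

```python
def inject_geo_header(raw_request: str, city: str, region: str) -> str:
    """Insert a hypothetical location header into a plain-text HTTP
    request, roughly as an in-path ISP device might. The header name
    and value format are illustrative, not Feeva's actual scheme."""
    head, sep, body = raw_request.partition("\r\n\r\n")
    lines = head.split("\r\n")
    # Keep the request line first; slot the new header in after it.
    lines.insert(1, f"X-Geo: city={city}; region={region}")
    return "\r\n".join(lines) + sep + body

request = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
modified = inject_geo_header(request, "Victoria", "BC")
```

Note that this only works on unencrypted traffic: every site the customer visits then receives the location field, whether or not the customer knows it is there.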

Should ISPs adopt Juniper’s add-on, we will be witnessing yet another instance of the repugnant ‘innovation’ that ISPs regularly demonstrate in their efforts to enhance their revenue streams. We have already seen them forcibly redirect customers’ DNS requests to ad-laden pages, provide (ineffective) ‘anti-infringement’ software to shield citizens from threats posed by three-strikes laws, and alter the payload content of data packets for advertising. After touching the payload – and oftentimes being burned by regulators – it seems as though the header is the next part of the packet to be modified, in the sole interest of the ISPs and to the detriment of customers’ privacy.

Continue reading

Review: Delete – The Virtue of Forgetting in the Digital Age

Viktor Mayer-Schonberger’s new book Delete: The Virtue of Forgetting in the Digital Age (2009) is a powerful effort to rethink basic principles of computing that threaten humanity’s epistemological nature. In essence, he tries to impress upon us the importance of adding ‘forgetfulness’ to digital data collection processes. The book is masterfully presented. It draws what are arguably correct theoretical conclusions (we need to get a lot better at deleting data to avoid significant normative, political, and social harms) while proposing devastatingly incorrect technological solutions (chiefly: legislating ‘forgetting’ into all data formats and OSes). In what follows, I sketch the aim of the book, some highlights, and why the proposed technological solutions are dead wrong.

The book is concerned with digital systems defaulting to store data ad infinitum (barring ‘loss’ of data on account of shifting proprietary standards). The ‘demise of forgetting’ in the digital era is accompanied by significant consequences: positively, externalizing memory to digital systems preserves information for future generations and facilitates ease of recall through search. Negatively, digital externalizations dramatically shift balances of power and obviate temporal distances. These latter points become the focus of the text, with Mayer-Schonberger arguing that defaulting computer systems to either delete or degrade data over time can redress the temporal obviations that presently accompany digitization processes.

Continue reading

Google Dashboard – Does It Need Another Name?

I like to pretend that I’m somewhat web savvy, and that I can generally guess where links on large websites will take me. This apparently isn’t the case with Blogger – I have a Blogger account to occasionally comment on blogs in the Google blogosphere, but despise the service enough that I don’t otherwise use it. I do, however, have an interest in Google’s newly released Dashboard, which is intended to show users what Google knows about them and how their privacy settings are configured.

Given that I don’t use Blogger much, I was amazed and pleased to see a link to the Dashboard in the upper right-hand corner of a Blogger page that I was reading when I logged in. Was this really the moment when Google made it easy for end-users to identify their privacy settings?

Alas, no. If I were a regular Blogger user I probably would have known better. What I was sent to when I clicked ‘Dashboard’ was my user dashboard for the Blogger service itself. This seems to be a branding issue; I had (foolishly!) assumed that various Google environments serving very different purposes would be labeled differently. Naming multiple things ‘Dashboard’ obfuscates access to a genuinely helpful service that Google is now providing. (I’ll note that a search for ‘Google Dashboard’ also calls up the App Status Dashboard, and that Google Apps also has a ‘Dashboard’ tab!)

Continue reading

Thinking About a ‘Privacy Commons’

In some privacy circles there is a vision of creating a simple method of decoding privacy policies. As it stands, privacy policies ‘exist’ in a nebulous domain of legalese. Few people read these policies, and fewer still understand what they do (and do not) say. The same has traditionally been true of many copyright agreements. To assuage this issue surrounding copyright, Creative Commons was created. Privacy groups are hoping to take some of the lessons from Creative Commons and apply them to privacy policies.

I need to stress that this is a ‘thinking’ piece – I’ve been bothered by some of the models and diagrams used to express the ‘privacy commons’ because, while they’re great academic pieces, they’re nigh useless for the public at large. When I use the terms ‘public at large’ and ‘useless’, what I am driving at is this: Creative Commons works so well because it put together a VERY simple system that lets people quickly understand what copyright is being asserted over particular works. A privacy commons will live (or, very possibly, die) on its ease of access and use.

So, let’s think about the use-value of any mode of description. The key issue with many commons approaches is that they try to do far too much all at once. Is there necessarily a need for a uniform commons statement, or is privacy sufficiently complicated that we should adopt a medical privacy commons, a banking privacy commons, a social networking privacy commons, and so forth? Perhaps, instead of cutting the privacy cake so granularly (i.e. by market segment), we should try to boil down key principles and then offer real-language explanations for each principle’s application in particular business environments. This division of the commons is a topic that researchers appreciate and struggle with.
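As a way of testing the ‘key principles plus real-language explanations’ idea, here is a minimal sketch. The principle names and wording below are entirely hypothetical – there is no actual privacy-commons standard being quoted – but they show how a small, fixed vocabulary could render as plain statements a reader can scan in seconds.

```python
# Illustrative only: hypothetical principle names and wording, not an
# existing privacy-commons vocabulary.
PRINCIPLES = {
    "collection": "We only collect data you give us directly.",
    "retention": "We delete your data within 30 days of account closure.",
    "sharing": "We never sell or share your data with third parties.",
}

def plain_language_policy(principles: dict) -> str:
    """Render each key principle as a one-line, real-language statement,
    in the spirit of Creative Commons' human-readable license deeds."""
    return "\n".join(
        f"{name.title()}: {text}" for name, text in principles.items()
    )

summary = plain_language_policy(PRINCIPLES)
```

The point of the sketch is the constraint, not the code: a handful of fixed principles, each with one sentence of plain language, is closer to the Creative Commons model than a per-industry proliferation of commons statements.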

Continue reading

Teaching Portfolio – Save Everything?

Normally when I talk about retaining data, I talk about retaining targeted information – don’t save everything, only what you need, and (if the information is about other people) only what you said you’d retain for particular stated purposes.

I was at a TA Conference yesterday, and at the tail end of it one of the presentations was about creating a teaching portfolio and a teaching philosophy. In particular, we were encouraged to save everything from students that pertained to how we taught, as well as copies of course outlines/lecture notes/etc. The idea was that by aggregating all data, especially that from students, we could filter out what we don’t need – it’s easier to filter than to find more data.

This is the exact opposite of how I think data retention should operate, and I’m not alone. The principles behind the EU’s Safe Harbour framework, as well as UoG privacy policies, support my stance that all collected information must be targeted, that people whose data is being collected must be aware of why it is being collected, and that there must be a stipulated duration for which the information will be retained. I’m not really concerned with whether this particular presenter was recommending actions that were at least in tension with the UoG’s privacy principles – what I’m interested in is whether you would keep all of this information. I couldn’t, not unless I was totally up front with students, but I don’t know if that’s just me being particularly paranoid. Is retaining this information common practice in the teaching profession?

Online Data Storage and Privacy

Last week Google, Microsoft, and Apple revealed updates to their online data storage platforms – Google now lets users purchase additional space for their various Google applications, Microsoft provides a Live Skydrive (essentially an online network drive), and Apple completely revamped their .Mac solution.

The idea behind these services is that people who are already using, or are considering using, the aforementioned companies’ online services will be enticed by the idea that they can store hordes of information in ‘safe’ repositories; we can trust that Google, Microsoft, and Apple won’t lose our data, right? This isn’t entirely true – at least Google and Microsoft have previously lost client data and could not always restore it. Individuals cannot count absolutely on any of these services, though they are likely to be more reliable than personal backups. What’s more, these online solutions make life easier by letting users stop worrying about performing personal data backups – this is their real selling feature.

There are issues that emerge with all of these services: first, clients cannot know what country their data is being stored in, potentially leaving it subject to foreign surveillance laws; second, clients cannot verify what any of these corporations are actually doing with their data.

Continue reading