Mobile Security and the Economics of Ignorance

Mobile penetration is extremely high in Canada: 78% of Canadian households had a mobile phone in 2010, 50% of young households rely exclusively on mobiles, and 33% of Canadians lack landlines altogether. Given that mobile phones hold considerably more information than ‘dumb’ landlines and are so widely dispersed, it is important to consider their place in our civil communications landscape. More specifically, I think we must consider the privacy and security implications associated with contemporary mobile communications devices.

In this post I begin by outlining a series of smartphone-related privacy concerns, focusing specifically on location, association, and device storage issues. I then pivot to a recent – and widely reported – survey commissioned by Canada’s federal privacy commissioner’s office. I assert that the reporting inappropriately offloads security and privacy decisions onto consumers who are poorly situated – and technically unable – to protect their privacy or secure their mobile devices. I support this by pointing to intentional exploitations of users’ ignorance about how mobile applications interact with their device environments and the data residing on them. While the federal survey may be a useful rhetorical tool, I argue that it has limited practical use.

I conclude by asserting that privacy commissioners, and government regulators more generally, must focus their attention upon the Application Programming Interfaces (APIs) of smartphones. Only by focusing on APIs will we redress the economics of ignorance that are presently relied upon to exploit Canadians and cheat them out of their personal information.
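To make concrete why API design, rather than consumer vigilance, is the fulcrum here, consider a deliberately simplified sketch of a coarse-grained permission model. It is not any real platform’s API – the permission name, classes, and data below are hypothetical – but it illustrates how a single blanket grant hands an application far more information than the consent prompt implies.

```python
# Hypothetical, simplified model of a coarse-grained mobile permission API.
# Names ("READ_CONTACTS", AddressBook, FreeFlashlight) are illustrative only,
# not drawn from any real platform.

class AddressBook:
    """Stands in for the device's contact store."""
    def __init__(self, entries):
        self._entries = entries  # each entry: name, number, notes, ...

    def all_entries(self, app):
        # A single coarse permission gates *everything* in the store.
        if "READ_CONTACTS" not in app.granted_permissions:
            raise PermissionError("contacts permission not granted")
        return self._entries  # no field-level or purpose-level restriction


class App:
    def __init__(self, name, granted_permissions):
        self.name = name
        self.granted_permissions = granted_permissions


book = AddressBook([
    {"name": "Alice", "number": "555-0100", "notes": "physician"},
    {"name": "Bob", "number": "555-0101", "notes": "lawyer"},
])

# Once the blanket permission is granted at install time, the app can harvest
# every field of every contact, far beyond what the user likely intended.
flashlight = App("FreeFlashlight", granted_permissions={"READ_CONTACTS"})
print(flashlight.name, "can read", len(book.all_entries(flashlight)), "full contact records")
```

A regulator that scrutinizes the API itself – what it exposes, at what granularity, and under what defaults – can intervene at the source in a way that an individual reading a permission prompt cannot.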

Continue reading

Data Retention, Protection, and Privacy

Data retention is always a sensitive issue: what is retained, for how long, under what conditions, and who can access the data? Recently, Ireland’s Memorandum of Understanding (MoU) between the government and telecommunications providers was leaked, giving members of the public a non-redacted view of what these MoUs look like and how they integrate with the European data retention directive. In this post, I want to give a quick primer on the EU data retention directive and identify some key elements of Ireland’s MoU and the Article 29 Data Protection Working Party’s evaluation of the directive more generally. Finally, I’ll offer a few comments concerning data protection versus privacy protection, using the EU data protection directive as an example. The aim of this post is to identify a few deficiencies in both data retention and data protection laws and to argue that privacy advocates and government officials should defend privacy first, approaching data protection as a tool rather than an end in itself.

A Quick Primer on EU Data Retention

In Europe, Directive 2006/24/EC (the Data Retention Directive, or DRD) required member states to pass legislation mandating the retention of particular telecommunications data. Law enforcement sees retained data as useful for public safety reasons. A community-level effort was required to facilitate harmonized data retention: without it, differences in members’ national laws meant that the EU was unlikely to have broadly compatible cross-national retention standards. As we will see, this concern remains well after the Directive’s passage.

Continue reading

Packet Headers and Privacy

One of the largest network vendors in the world is planning to offer its ISP partners a way to modify HTTP headers, getting ISPs into the advertising racket. Juniper Networks, which sells routers to ISPs, is partnering with Feeva, an advertising solutions company, to modify data packets’ header information so that the packets include geographic information. These modified packets will be transmitted to any and all websites the customer visits, with individuals then receiving targeted advertisements according to their geographic location. Effectively, Juniper’s proposal may see ISPs leverage their existing customer service information to modify customers’ data traffic for the purpose of enhancing the geographic relevance of online advertising. This poses an extreme danger to citizens’ locational and communicative privacy.
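To illustrate the mechanics at issue, here is a minimal sketch of in-path header injection by an ISP device. The header name (X-Subscriber-Geo) and the lookup against billing records are assumptions for illustration; the actual field names and data a Feeva deployment would insert are not described here.

```python
# Minimal sketch of in-path HTTP header injection by an ISP middlebox.
# The header name and the subscriber lookup are hypothetical, for illustration only.

# ISP billing/customer-service records, keyed by the subscriber's IP address.
SUBSCRIBER_RECORDS = {
    "203.0.113.7": {"postal_code": "V8W 1P6", "city": "Victoria", "region": "BC"},
}

def inject_geo_header(client_ip: str, request_headers: dict) -> dict:
    """Return a copy of the outbound request headers with location data appended.

    This is what makes the practice invasive: the destination website never
    asked for, and the subscriber never consented to, this disclosure.
    """
    headers = dict(request_headers)
    record = SUBSCRIBER_RECORDS.get(client_ip)
    if record:
        # Hypothetical header; a real deployment would use its own field name.
        headers["X-Subscriber-Geo"] = f'{record["city"]},{record["region"]},{record["postal_code"]}'
    return headers

original = {"Host": "example.com", "User-Agent": "Mozilla/5.0"}
modified = inject_geo_header("203.0.113.7", original)
print(modified["X-Subscriber-Geo"])  # -> Victoria,BC,V8W 1P6
```

Because the injection happens in the network path between browser and web server, neither endpoint can prevent it; only end-to-end encryption (HTTPS) puts the headers out of the ISP’s reach.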

Should ISPs adopt Juniper’s add-on, we will be witnessing yet another instance of the repugnant ‘innovation’ that ISPs regularly demonstrate in their efforts to enhance their revenue streams. We have already seen them forcibly redirect customers’ DNS requests to ad-laden pages, provide (ineffective) ‘anti-infringement’ software to shield citizens from threats posed by three-strikes laws, and alter the payload content of data packets for advertising purposes. After touching the payload – and oftentimes being burned by regulators for it – ISPs now appear ready to modify the header, again solely in their own interest and to the detriment of customers’ privacy.

Continue reading

Review: Delete – The Virtue of Forgetting in the Digital Age

Viktor Mayer-Schonberger’s new book Delete: The Virtue of Forgetting in the Digital Age (2009) is a powerful effort to rethink basic principles of computing that threaten humanity’s epistemological nature. In essence, he tries to impress upon us the importance of adding ‘forgetfulness’ to digital data collection processes. The book is masterfully presented. It draws what are arguably correct theoretical conclusions (we need to get a lot better at deleting data to avoid significant normative, political, and social harms) while proposing devastatingly incorrect technological solutions (key: legislating ‘forgetting’ into all data formats and OSes). In what follows, I sketch the aim of the book, some highlights, and why the proposed technological solutions are dead wrong.
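For concreteness, here is a toy sketch of the sort of mechanism the book gestures at: data that carries its own expiry date, and a storage layer that refuses to return (and quietly purges) anything past it. The class and field names are my own illustration, not a design drawn from the text.

```python
# Illustrative sketch of "forgetting by default": every stored item carries an
# expiry date, and the storage layer purges anything that has gone stale.
# A toy model of the kind of mechanism the book imagines, not its specification.

from datetime import datetime, timedelta

class ExpiringStore:
    def __init__(self):
        self._items = {}  # key -> (value, expires_at)

    def put(self, key, value, retain_for: timedelta):
        self._items[key] = (value, datetime.now() + retain_for)

    def get(self, key):
        value, expires_at = self._items.get(key, (None, None))
        if expires_at is None:
            return None
        if datetime.now() >= expires_at:
            del self._items[key]  # the system "forgets" on its own
            return None
        return value

store = ExpiringStore()
store.put("search_query", "embarrassing symptom", retain_for=timedelta(days=90))
print(store.get("search_query"))  # retrievable only within the retention window
```

Expiry is easy to bolt onto a single application like this; legislating it into every data format and operating system is a very different proposition, and that is where the proposal goes wrong.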

The book is concerned with digital systems defaulting to store data ad infinitum (barring ‘loss’ of data on account of shifting proprietary standards). The ‘demise of forgetting’ in the digital era is accompanied by significant consequences: positively, externalizing memory to digital systems preserves information for future generations and facilitates easy recall through search; negatively, digital externalizations dramatically shift balances of power and obviate temporal distances. These latter points become the focus of the text, with Mayer-Schonberger arguing that defaulting computer systems to either delete or degrade data over time can redress the obviation of temporal distance that presently accompanies digitization processes.

Continue reading

Thinking About a ‘Privacy Commons’

In some privacy circles there is a vision of creating a simple method of decoding privacy policies. As it stands, privacy policies ‘exist’ in a nebulous domain of legalese: few people read these policies, and fewer still understand what they do (and do not) say. The same has traditionally been true of many copyright agreements. To address that problem in the copyright context, the Creative Commons licenses were created. Privacy groups are hoping to take some of the lessons from the Creative Commons and apply them to privacy policies.

I need to stress that this is a ‘thinking’ piece – I’ve been bothered by some of the models and diagrams used to express the ‘privacy commons’ because I think that, while they’re great academic pieces, they’re nigh useless for the public at large. When I use the terms ‘public at large’ and ‘useless’, what I am driving at is this: the Creative Commons works so well because it put together a very simple system that lets people quickly understand what copyright is being asserted over particular works. A privacy commons will live (or, very possibly, die) on its ease of access and use.

So, let’s think about the use-value of any mode of description. The key issue with many commons approaches is that they try to do far too much all at once. Is there necessarily a need for a single uniform commons statement, or is privacy sufficiently complicated that we need a medical privacy commons, a banking privacy commons, a social networking privacy commons, and so forth? Perhaps, instead of cutting the privacy cake so granularly (i.e. by market segment), we should boil down key principles and then offer plain-language explanations of each principle’s application in particular business environments. This division of the commons is a question that researchers appreciate and struggle with.
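As a thought experiment in the principle-based direction, here is a sketch of what a machine-readable ‘privacy commons’ label might contain, rendered as a short plain-language deed. The principle names and values are invented for illustration; they are not a proposed standard.

```python
# Hypothetical principle-based "privacy commons" label, analogous to a
# Creative Commons deed. Field names and values are illustrative only.

PRIVACY_LABEL = {
    "collection": "only what is needed",          # data minimization
    "retention": "deleted after 12 months",       # bounded retention
    "third_party_sharing": "none",                # no sale to advertisers
    "law_enforcement_access": "warrant required",
    "user_controls": ["export", "correct", "delete"],
}

def plain_language(label: dict) -> str:
    """Render the label as the kind of short, readable summary a Creative
    Commons-style deed provides, instead of pages of legalese."""
    lines = [
        f"We collect: {label['collection']}",
        f"We keep it: {label['retention']}",
        f"We share it with third parties: {label['third_party_sharing']}",
        f"Police access requires: {label['law_enforcement_access']}",
        "You can: " + ", ".join(label["user_controls"]),
    ]
    return "\n".join(lines)

print(plain_language(PRIVACY_LABEL))
```

Whether one such vocabulary can meaningfully cover medicine, banking, and social networking alike is exactly the division-of-the-commons question raised above.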

Continue reading