Do You Know Who Your iPhone’s Been Calling?

An increasing percentage of Western society carries a computer with them every day that is enabled with geo-locative technology. We call them smartphones, and they’re cherished pieces of technology. While people are (sub)consciously aware of this love of technology, they’re less aware of how these devices compromise their privacy, and that’s the topic of this post.

Recent reports on the state of the iPhone operating system show that the device’s APIs permit incredibly intrusive surveillance of personal behaviour and actions. I’ll walk through those reports and then write somewhat more broadly about the importance of understanding how APIs function if scrutiny of phones, social networks, and so forth is to be meaningful. Further, I’ll argue that privacy policies – while potentially useful for covering companies legally – are less helpful in actually educating end-users about a corporate privacy ethos. As a result, these policies need to be written in a more accessible format, which may include a statement of privacy ethics baked into a three-stage privacy statement.

iOS devices, such as the iPhone, iPad, Apple TV 2.0, and iPod touch, have Unique Device Identifiers (UDIDs) that can be used to discreetly track how customers use applications associated with the device. A recent technical report, written by Eric Smith of PSKL, has shed light on how developers can access a device’s UDID and correlate it with personally identifiable information. UDIDs are, in effect, serial numbers that are accessible by software. Many of the issues surrounding the UDID are arguably similar to those around the Pentium III’s serial codes, which raised the wrath of the privacy community and were quickly discontinued (a report on the PIII privacy concerns is available here).
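To make the concern concrete, here is a simplified sketch in Python (not Apple’s APIs; all names, identifiers, and records are invented for illustration) of how two unrelated applications that each log the same constant device identifier can be joined into a single profile:

```python
# Hypothetical data logged by two unrelated apps. Each app independently
# records the device's constant hardware identifier alongside its own data.
game_analytics = [
    {"udid": "2b6f0cc904d137be2e1730235f5664094b831186", "event": "level_complete"},
    {"udid": "a8f5f167f44f4964e6c998dee827110c9ef6d6c1", "event": "purchase"},
]
social_signups = [
    {"udid": "2b6f0cc904d137be2e1730235f5664094b831186",
     "name": "Alice Example", "email": "alice@example.com"},
]

# Because the UDID never changes across applications, anyone holding both
# datasets can attach a real identity to 'anonymous' usage events.
pii_by_udid = {row["udid"]: row for row in social_signups}
profiles = [
    {**event, **pii_by_udid[event["udid"]]}
    for event in game_analytics
    if event["udid"] in pii_by_udid
]
print(profiles)  # the 'level_complete' event is now tied to Alice's identity
```

The join requires nothing more than a shared constant key, which is precisely what makes a software-readable serial number so much more consequential than any single app’s logs.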

Continue reading

References for Traffic Analysis, Privacy, and Social Media

In my presentation at Social Media Camp Victoria (abstract available!), I drew heavily from various academic literatures and public sources. Given the nature of talks, it’s nearly impossible to cite sources as you speak without entirely disrupting the flow of the presentation. This post is an attempted end-run around that problem: you get references and (what was, I hope) a presentation that flowed nicely!

There is a full list of references below, as well as a downloadable version of my keynote presentation (sorry powerpoint users!). As you’ll see, some references are behind closed academic paywalls: this really, really, really sucks, and is an endemic problem plaguing academia. Believe me when I say that I’m as annoyed as you are that the academic publishing system locks up the research that the public is paying for (actually, I probably hate it even more than you do!), but unfortunately I can’t do much to make it more available without running afoul of copyright trolls myself. As for books that I’ve drawn from, there are links to chapter selections or book reviews where possible.

Keynote presentation [4.7MB; made in Keynote ’09]

References:

Breyer, P. (2005). ‘Telecommunications Data Retention and Human Rights: The Compatibility of Blanket Traffic Data Retention with the ECHR’. European Law Journal 11: 365-375.

Chew, M., Balfanz, D., Laurie, B. (2008). ‘(Under)mining Privacy in Social Networks’, Proceedings of W2SP (Web 2.0 Security and Privacy): 1-5.

Danezis, G. and Clayton, R. (2008). ‘Introducing Traffic Analysis’, in A. Acquisti, S. Gritzalis, C. Lambrinoudakis, and S. D. C. di Vimercati (eds.). Digital Privacy: Theory, Technologies, and Practices. New York: Auerbach Publications. 95-116.

Elmer, G. (2004). Profiling Machines: Mapping the Personal Information Economy. Cambridge, Mass.: The MIT Press.

Friedman, L. M. (2007). Guarding Life’s Dark Secrets: Legal and Social Controls over Reputation, Propriety, and Privacy. Stanford: Stanford University Press. [Excellent book review of text]

Gandy Jr., O. H. (2006). ‘Data Mining, Surveillance, and Discrimination in the Post-9/11 Environment’, in K. D. Haggerty and R. V. Ericson (eds.). The New Politics of Surveillance and Visibility. Toronto: University of Toronto Press, 79-110. [Early draft presented to the Political Economy Section, IAMCR, July 2002]

Kerr, I. (2002). ‘Online Service Providers, Fidelity, and the Duty of Loyalty’, in T. Mendina and B. Rockenbach (eds). Ethics and Electronic Information. Jefferson, North Carolina: McFarland Press.

Mitrou, L. (2008). ‘Communications Data Retention: A Pandora’s Box for Rights and Liberties’, in A. Acquisti, S. Gritzalis, C. Lambrinoudakis, and S. D. C. di Vimercati (eds.). Digital Privacy: Theory, Technologies, and Practices. New York: Auerbach Publications, 409-434.

Rubinstein, I., Lee, R. D., Schwartz, P. M. (2008). ‘Data Mining and Internet Profiling: Emerging Regulatory and Technological Approaches’. University of Chicago Law Review 75: 261.

Saco, D. (1999). ‘Colonizing Cyberspace: National Security and the Internet’, in J. Weldes, M. Laffey, H. Gusterson, and R. Duvall (eds). Cultures of Insecurity: States, Communities, and the Production of Danger. Minneapolis: University of Minnesota Press, 261-292. [Selection from Google Books]

Simmons, J. L. (2009). ‘Buying You: The Government’s Use of Fourth-Parties to Launder Data about “The People”’, Columbia Business Law Review 2009/3: 950-1012.

Strandburg, K. J. (2008). ‘Surveillance of Emergent Associations: Freedom of Association in a Network Society’, in A. Acquisti, S. Gritzalis, C. Lambrinoudakis, and S. D. C. di Vimercati (eds.). Digital Privacy: Theory, Technologies, and Practices. New York: Auerbach Publications. 435-458.

Winner, L. (1986). The Whale and the Reactor. Chicago: University of Chicago Press. [Book Review]

Zittrain, J. (2008). The Future of the Internet: And How to Stop It. New Haven: Yale University Press. [Book Homepage]

Forthcoming Talk at Social Media Club Vancouver

I’ve been invited to talk to Vancouver’s vibrant Social Media Club on October 7! I’m thrilled to be presenting, and will be giving a related (though very different) talk from the one a few days earlier at Social Media Camp Victoria. Instead of making traffic analysis a focus, I’ll be speaking more broadly about what I’ll refer to as a ‘malaise of privacy’. This general discomfort with moving around online is (I will suggest) significantly related to the opaque privacy laws and protections that supposedly secure individuals’ privacy online, as contrasted against the daily reality of identity theft, data breaches, and so forth. The thrust will be to provide those in attendance with the theoretical background to develop their own ethic(s) of privacy, making legal privacy statements more accessible and understandable.

See below for the full abstract:

Supplementing Privacy Policies with a Privacy Ethic

Social media platforms are increasingly common (and often cognitively invisible) facets of Western citizens’ lives; we post photos to Facebook and Flickr, engage in conversations on Orkut and Twitter, and relax by playing games on Zynga and Blizzard infrastructures. The shift to the Internet as a platform for mass real-time socialization and service provision demands a tremendous amount of trust on the part of citizens, and research indicates that citizens are increasingly concerned about whether their trust is well placed. Analytics, behavioural advertising, identity theft, and data mismanagement strain the public’s belief that digital systems are ‘privacy neutral’, even as citizens remain worried about the technological determinism purported to drive socialized infrastructures.

For this presentation, I begin by briefly reviewing the continuum of the social web, touching on the movement from Web 1.0 to 2.0, and the future as ‘Web Squared’. Next, I address the development of various data policy instruments intended to protect citizens’ privacy online and to facilitate citizens’ trust in social media environments that require personal information as the ‘cost of entry’. Drawing on academic and popular literature, I suggest that individuals participating in social media environments care deeply about their privacy and distrust (and dislike) the ubiquity of online surveillance, especially in the spaces where they communicate and play. Daily experiences with data protection – often manifest in the form of privacy statements and policies – are seen as unapproachable, awkward, and obtuse by most social media users. Privacy statements and their oft-associated surveillance infrastructures contribute to a broader social malaise surrounding the effectiveness of formal data protection and privacy laws.

Given the presence of this malaise, and potential inability of contemporary data protection laws to secure individuals’ privacy, what can be done? I suggest that those involved in social media are well advised to develop an ethic of privacy to supplement legally required privacy statements. By adopting clear statements of ethics, supplemented with legal language and opt-in data disclosures of personal information, operators of social media environments can be part of the solution to society’s privacy malaise. Rather than outlining an ethic myself, I provide the building blocks for those attending to establish their own ethic. I do this by identifying dominant theoretical approaches to privacy: privacy as a matter of control, as an individual vs community vs hybrid issue, as an issue of knowledge and agency, and as a question of contextual data flows. With an understanding of these concepts, those attending will be well suited to supplement their privacy statements and policies with a nuanced and substantive ethics of privacy.

Forthcoming Talk at Social Media Camp Victoria

On October 3 I’ll be presenting at Social Media Camp Victoria with Kris Constable about a few risks to privacy associated with social media. Kris is a leading Canadian privacy advocate, an expert in information security, and the operator of PrivaSecTec.

I’ll be talking about traffic analysis and data mining practices that can be used to engage in massive surveillance of social networking environments, and the value of drawing links between users rather than investigating the content of communications. The argumentative ‘thrust’ is that freedoms of expression and association may offer an approach to secure privacy in the face of weakened search laws. The full abstract can be read below.

Abstract:

Citizens are increasingly moving their communications and forms of expression onto social media environments that encourage both public and private collaborative efforts. Through social media, individuals can reaffirm existing relationships, give birth to new and novel communities and community-types, and establish the classical political advocacy groups that impact government decisions and processes. In coming together online for their various reasons, citizens expect that their capacity to engage with one another should, and in some respect does, parallel their expectations of privacy in the analogue world.

In this presentation, I first outline expectations and realities of privacy on and offline, with an emphasis on data traffic (i.e. non-content) analysis born from Signals Intelligence (SIGINT), and SIGINT’s use in civilian governmental practices. I then proceed to outline, in brief, how social media generally can be used to identify associations, and a few reasons why such associations can undermine the communicative privacy expected and needed for the long-term survival of vibrant constitutional democracies. Rather than ending on a note of doom and gloom, however, I suggest a novel way of approaching privacy-related problems stemming from massive traffic data analysis in social media networks. While the language of freedom from unjustified searches is often used to resist traffic analysis, I draw on recent privacy scholarship to suggest that freedom of expression and association offers a novel (and possibly superior) approach to defending privacy interests in social media from SIGINT-based surveillance.
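The kind of link-drawing described above can be illustrated with a toy sketch in Python. The call records below are invented, and real traffic analysis operates on vastly larger datasets, but the principle is the same: associations are inferred purely from who-contacted-whom metadata, and message content is never examined.

```python
from collections import defaultdict
from itertools import combinations

# (caller, callee) pairs -- metadata only, no content. Invented for the example.
call_records = [
    ("alice", "bob"), ("bob", "carol"), ("alice", "carol"),
    ("dave", "erin"), ("carol", "alice"),
]

# Build an undirected association graph: an edge means the two parties
# communicated at least once.
graph = defaultdict(set)
for a, b in call_records:
    graph[a].add(b)
    graph[b].add(a)

# Even this trivial structure reveals a tight cluster (alice/bob/carol) and
# a separate pair (dave/erin) -- the group structure that SIGINT-style
# analysis surfaces without ever reading a single message.
mutual = [
    (a, b) for a, b in combinations(sorted(graph), 2)
    if b in graph[a]
]
print(mutual)
```

Because nothing here touches the content of any communication, such analysis tends to fall outside the legal protections built around searches of content, which is exactly why expression and association framings become attractive.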

Data Retention, Protection, and Privacy

Data retention is always a sensitive issue; what is retained, for how long, under what conditions, and who can access the data? Recently, Ireland’s Memorandum of Understanding (MoU) between the government and telecommunications providers was leaked, providing members of the public with a non-redacted view of what these MoUs look like and how they integrate with the European data retention directive. In this post, I want to give a quick primer on the EU data retention directive, then identify some key elements of Ireland’s MoU and the Article 29 Data Protection Working Group’s evaluation of the directive more generally. Finally, I’ll offer a few comments concerning data protection versus privacy protection, using the EU data protection directive as an example. The aim of this post is to identify a few deficiencies in both data retention and data protection laws and to argue that privacy advocates and government officials should defend privacy first, approaching data protection as a tool rather than an end in itself.
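To make the ‘what is retained?’ question concrete, here is a rough sketch of the categories of traffic data the Directive obliges providers to retain: source, destination, date/time and duration, service type, equipment, and the location of mobile equipment, but never the content of the communication. The field names and values below are my own invention; the Directive specifies categories, not schemas.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RetainedCallRecord:
    source_number: str        # trace and identify the source
    destination_number: str   # trace and identify the destination
    start: datetime           # date and time of the communication
    duration_seconds: int     # duration of the communication
    service_type: str         # e.g. telephony, SMS, internet access
    equipment_id: str         # e.g. an IMEI/IMSI-style identifier
    cell_id: str              # location of mobile equipment at call start

# An invented record for illustration.
record = RetainedCallRecord(
    source_number="+353-1-555-0100",
    destination_number="+353-1-555-0199",
    start=datetime(2010, 9, 1, 9, 30),
    duration_seconds=240,
    service_type="telephony",
    equipment_id="IMEI-354000000000000",
    cell_id="DUB-0421",
)
# Note what is absent: there is no recording, transcript, or content field.
print(record.service_type, record.duration_seconds)
```

Even without content, records like this accumulate into exactly the who-talked-to-whom metadata that makes traffic analysis powerful, which is why retention and privacy protection pull against each other.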

A Quick Primer on EU Data Retention

In Europe, Directive 2006/24/EC (the Data Retention Directive, or DRD) required member-nations to pass legislation mandating retention of particular telecommunications data. Law enforcement sees retained data as useful for public safety reasons. A community-level effort was required to facilitate harmonized data retention; differences in members’ national laws meant that the EU was unlikely to have broadly compatible cross-national retention standards. As we will see, this concern remains well after the Directive’s passage.

Continue reading

Solved: Bluetooth Devices Not Connecting to OS X

I’ve exclusively used Bluetooth devices to connect to my docked MacBook Pro for many, many months. It’s been a blissful period of time…one that came to a crashing halt this morning. After spending an aggravating period of time getting things working, I wanted to share with the Internet broadly (one) solution to getting both an Apple Wireless Bluetooth Keyboard and a Magic Mouse (re)paired with OS X. I will note that I first ‘lost’ my Magic Mouse, and after restarting my computer I was subsequently unable to pair my Apple Wireless Bluetooth Keyboard.

Problem:

After months of blissful Bluetooth connectivity, I’ve awoken to discover that neither my Magic Mouse nor my Apple Bluetooth Keyboard is properly pairing. First my Magic Mouse failed to scroll, which led me to remove the Magic Mouse and attempt to pair it with my computer again. This attempt failed. I then rebooted my computer and was still unable to pair my computer and Magic Mouse. After another restart, my Apple Bluetooth Keyboard was also unable to be used as an input device with my computer. It is important to note that, while the Bluetooth Device Manager reported this failure to pair, both devices are reported as ‘connected’ under the Bluetooth icon in the OS X menu bar. Neither device, at this point, is responding to any input.

Continue reading