Do You Know Who Your iPhone’s Been Calling?

An increasing percentage of Western society is carrying a computer with them, every day, that is enabled with geo-locative technology. We call them smartphones, and they're cherished pieces of technology. While people are (sub)consciously aware of this attachment to their devices, they're less aware of how these devices compromise their privacy, and that's the topic of this post.

Recent reports on the state of the iPhone operating system show that the device's APIs permit incredibly intrusive surveillance of personal behaviour and actions. I'll walk through those reports and then write somewhat more broadly about the importance of understanding how APIs function if scrutiny of phones, social networks, and so forth is to be meaningful. Further, I'll argue that privacy policies – while potentially useful for covering companies' legal backs – are less helpful in actually educating end-users about a corporate privacy ethos. These policies, as a result, need to be written in a more accessible format, which may include a statement of privacy ethics baked into a three-stage privacy statement.

iOS devices, such as the iPhone, iPad, Apple TV 2.0, and iPod touch, have Unique Device Identifiers (UDIDs) that can be used to discreetly track how customers use applications associated with the device. A recent technical report, written by Eric Smith of PSKL, has shed light on how developers can access a device's UDID and correlate it with personally identifiable information. UDIDs are, in effect, serial numbers that are accessible by software. Many of the issues surrounding the UDID are arguably similar to those raised by the Pentium III's serial codes (codes which drew the wrath of the privacy community and were quickly discontinued; a report on the PIII privacy concerns is available here).
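To make the correlation concern concrete, here is a minimal, hypothetical sketch of how a developer's server could join a software-readable device identifier against account details. The identifier, app names, and records below are invented for illustration and are not drawn from Smith's report; the point is only that a persistent, shared identifier lets separate applications' data be linked to one person.

```python
# Hypothetical sketch: a server-side table keyed by a device identifier.
# Because the identifier never changes and is readable by every app on the
# device, any one app that learns the user's identity effectively
# 'de-anonymizes' every other app's records for that device.

profiles = {}

def record_event(udid, app, event, account=None):
    """Log an application event against a device identifier."""
    profile = profiles.setdefault(udid, {"apps": {}, "account": None})
    if account is not None:
        # This app knows who the user is; the link now covers all apps.
        profile["account"] = account
    profile["apps"].setdefault(app, []).append(event)
    return profile

# A game only ever sees an 'anonymous' device...
record_event("a1b2c3d4", app="game", event="level_completed")
# ...but a social app that collects an email ties it all to a person.
profile = record_event("a1b2c3d4", app="social", event="login",
                       account="alice@example.com")

print(profile["account"])       # alice@example.com
print(sorted(profile["apps"]))  # ['game', 'social']
```

The game developer never asked for personal information, yet its usage data is now attached to a named individual purely because both parties keyed their records to the same serial number.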

Continue reading

Update: Feeva, Advertising, and Privacy

When you spend a lot of time working in the areas of copyright, traffic sniffing and analysis, and the Internet's surveillance infrastructure more generally, there is a tendency to expect bad things on a daily basis. This expectation is built up from years of horrors, and I'm rarely disappointed in my day-to-day research. Thus, when Wired reported that a company called Feeva was injecting locational information into packet headers, the actions didn't come across as surprising; privacy infringements like those reported in the Wired piece are depressingly common. In response, I wrote a brief post decrying the modification of packet headers for geolocational purposes and was quoted by Jon Newton on P2Pnet on my reactions to what I understood, at the time, was going on.

After the post and quotations turned up on P2Pnet, folks at Feeva quickly got hold of me, and I've since had a few conversations with them. It turns out that (a) there were factual inaccuracies in the Wired article, and (b) Feeva isn't the privacy-devastating monster that it came off as in that article. Given my increased familiarity with the technology, I want to better outline what their technology does and alter my earlier post's conclusion: Feeva is employing a surprisingly privacy-protective advertising system. As it stands, their system is far better at limiting advertising-related infringements on individuals' privacy than any other scalable model I'm presently aware of.

Before I get into the post proper, however, I do want to note that I am somewhat limited in what I can speak about. I've spoken with both Feeva's Chief Technology Officer, Miten Sampat, and its Chief Privacy Officer, Dr. Don Lloyd Cook, and they've been incredibly generous in sharing their time and corporate information. The two have been very forthcoming with the technical details of the system employed, and (unsurprisingly) some of this information is protected. As such, I can't get into super-specifics (i.e. X technology uses Y protocol and Z hardware) but, while some abstractions are required, I think I've managed to get across the key elements of the system they've put in place.

Continue reading

Packet Headers and Privacy

One of the largest network vendors in the world is planning to offer its ISP partners an opportunity to modify HTTP headers and thereby get ISPs into the advertising racket. Juniper Networks, which sells routers to ISPs, is partnering with Feeva, an advertising solutions company, to modify data packets' headers so that they include geographic information. These modified packets will be transmitted to any and all websites that the customer visits, and will see individuals receive advertisements targeted to their geographical location. Effectively, Juniper's proposal may see ISPs leverage their existing customer service information to modify customers' data traffic for the purposes of enhancing the geographic relevance of online advertising. This poses an extreme danger to citizens' locational and communicative privacy.
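The mechanics are simple to sketch. Assuming the system works roughly as described – an in-path device appending a location header to a subscriber's outbound, plaintext HTTP requests – a minimal, hypothetical illustration looks like this. The header name `X-Subscriber-Geo` and the IP-to-postal-code table are my inventions, not Feeva's or Juniper's actual design:

```python
# Hypothetical sketch of in-path HTTP header injection. An ISP device that
# can see a subscriber's plaintext HTTP request could append a header
# derived from billing records before forwarding the request upstream.

# Invented mapping from subscriber IP to a postal code in ISP records.
SUBSCRIBER_GEO = {
    "203.0.113.7": "V8W 2Y2",
}

def inject_geo_header(raw_request: str, subscriber_ip: str) -> str:
    """Append a location header to a plaintext HTTP/1.1 request."""
    geo = SUBSCRIBER_GEO.get(subscriber_ip)
    if geo is None:
        return raw_request  # unknown subscriber: pass through untouched
    head, sep, body = raw_request.partition("\r\n\r\n")
    return f"{head}\r\nX-Subscriber-Geo: {geo}{sep}{body}"

request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "\r\n"
)
modified = inject_geo_header(request, "203.0.113.7")
print("X-Subscriber-Geo" in modified)  # True
```

Note what makes this troubling: the subscriber never opted in, the destination website receives the location on every request, and nothing in the HTTP exchange reveals to the user that the modification happened.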

Should ISPs adopt Juniper's add-on, we will be witnessing yet another instance of the repugnant 'innovation' that ISPs regularly demonstrate in their efforts to enhance their revenue streams. We have already seen them forcibly redirect customers' DNS requests to ad-laden pages, provide (ineffective) 'anti-infringement' software to shield citizens from threats posed by three-strikes laws, and alter the payload content of data packets for advertising. Having touched the payload – and oftentimes been burned by regulators for doing so – ISPs now seem to see the header as the next part of the packet to be modified in their sole interest and to the detriment of customers' privacy.

Continue reading

Apple and Locational Data Sharing

Apple's entrance into the mobile advertising marketplace began with the announcement of iAd. Alongside iAd comes persistent locational surveillance of Apple's customers for the benefit of advertisers and Apple. The company's advertising platform is controversial because Apple gives it a privileged position in their operating system, iOS4, and because the platform can draw on an iPhone's locational awareness (using the phone's GPS functionality) to deliver targeted ads.

In this post I'm going to first give a brief background on iAd and some of the broader issues surrounding Apple's deployment of their advertising platform. From there, I want to recap what Steve Jobs stated in a recent interview at All Things Digital 8 concerning how Apple approaches locational surveillance through their mobile devices, and then launch into an analysis of Apple's recently changed terms of service for iOS4 devices as they relate to collecting, sharing, and retaining records of an iPhone's geographic location. I'll finish by noting that Apple may have inadvertently gotten itself into serious trouble through its heavy-handed control of the iAd environment combined with its modification of the privacy-related elements of its terms of service: Apple seems to have awoken the German data protection authorities. Hopefully the Germans can bring some transparency to a company regularly cloaked in secrecy.

Apple launched the iAd beta earlier this year and has integrated the advertising platform into its mobile environment such that ads are seen within applications, and clicking on an ad doesn't take individuals out of the application they are using. iAds can access core iOS4 functionality, including locational information, and can be coded using HTML5 to provide rich advertising experiences. iAd was only made possible following Apple's January acquisition of Quattro, a mobile advertising agency. Apple purchased Quattro after being foiled in its attempt to acquire AdMob, which Google bought last year (with the FTC recently citing iAd as a contributing reason why the Google transaction was permitted to go through). Ostensibly, the rich advertising from iAds is intended to help developers produce cheap and free applications for Apple's mobile devices while retaining a long-term, ad-based revenue stream. Arguably, with Apple taking a 40% cut of all advertising revenue and limiting access to the largest rich-media mobile platform in the world, advertising makes sense for Apple's own bottom line, and it's just nice that they can 'help' developers along the way… Continue reading
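To see why location-aware ad delivery implies surveillance, it helps to sketch what such a system minimally has to do. The following is a hypothetical illustration only – the campaign inventory and nearest-point matching are my inventions and bear no relation to the actual iAd interfaces:

```python
# Hypothetical sketch of location-targeted ad selection. A device reports
# its coordinates; the server returns the campaign whose target area is
# nearest. The unavoidable side effect is that the server learns, on
# every ad request, roughly where the device is.
import math

# Invented inventory: campaign name -> (latitude, longitude) of target area.
CAMPAIGNS = {
    "victoria_coffee": (48.4284, -123.3656),
    "berlin_museum": (52.5200, 13.4050),
}

def nearest_campaign(lat: float, lon: float) -> str:
    """Pick the campaign whose target point is nearest the device."""
    def dist(target):
        tlat, tlon = target
        return math.hypot(lat - tlat, lon - tlon)
    return min(CAMPAIGNS, key=lambda name: dist(CAMPAIGNS[name]))

# A device in Victoria, BC receives the local campaign.
print(nearest_campaign(48.43, -123.36))  # victoria_coffee
```

Even in this toy version, targeting cannot work without the device disclosing its position to the ad server, which is precisely why the retention and sharing terms analyzed below matter.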

DoubleClick, Cookies, and Personal Information

The web operates the way it does, largely, because there is a lot of money to be made in the digitally-connected ecosystem. Without the revenues brought in by DoubleClick, as an example, Google would likely be reluctant to provide the free services that are intended to bring you into Google's ad-serving environment. A question that needs to be asked, however, is whether DoubleClick and related ad delivery systems (a) collect personal information and, (b) if so, whether such collections might constitute privacy infringements.
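As background for that question, the tracking mechanics of a third-party ad cookie can be sketched in a few lines. The site names and the logging scheme below are invented for illustration and are not drawn from DoubleClick's actual system; the sketch only shows why one ad server embedded across many sites accumulates a cross-site profile:

```python
# Hypothetical sketch of cross-site tracking via a third-party cookie.
# Many sites embed ads from the same ad server; the browser sends that
# server its cookie on every embedded request, so the server can build a
# browsing history keyed to one pseudonymous identifier.
import uuid

ad_server_log = {}  # cookie id -> list of sites where an ad was served

def serve_ad(cookie, embedding_site):
    """Simulate the ad server handling one embedded ad request."""
    if cookie is None:
        cookie = uuid.uuid4().hex  # first visit: mint a new identifier
    ad_server_log.setdefault(cookie, []).append(embedding_site)
    return cookie  # in practice, returned via a Set-Cookie header

# One browser visits three unrelated sites that embed the same ad server.
c = serve_ad(None, "news.example")
c = serve_ad(c, "health.example")
c = serve_ad(c, "travel.example")

# The ad server now holds a cross-site profile under a single identifier.
print(ad_server_log[c])
```

Whether that pseudonymous profile counts as 'personal information' is exactly the definitional question the rest of this post takes up.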

In the course of this post, I begin by outlining what constitutes personal information and then proceed to outline DoubleClick's method of collecting it. After providing these outlines, I argue that online advertising systems do collect personal information, and that the definitions Google offers for what constitutes 'personal information' are arguably out of line with Canadian sensibilities about what is 'personal information'. As a result, I'll conclude by asserting that violations may in fact be occurring, with the argument largely emerging from Nissenbaum's work on contextual integrity. Before proceeding, however, I'll note that I'm not a lawyer, nor am I a law student: what follows is born from a critical reading of information about digital services and writings from philosophers, political scientists, technologists, and privacy commissioners. Continue reading

Economics of Authenticity on Twitter

I'm on Twitter all the time; it's central to how I learn about discussions taking place about Deep Packet Inspection, a good way of finding privacy-folk from around the world, and it lets me feel semi-socialized even though I'm somewhat reclusive. When I use the social networking service, I intersperse bits of 'me' (e.g. This wine sucks!) beside news articles I've found and believe would be useful to my colleagues, and add in some (attempts at) humor. In this sense, I try to make my Twitter feed feel 'authentic', meaning that it is reasonably reflective of how I want to present myself in digital spaces. Further, that presentation resonates (to varying extents) with how I behave in the flesh.

When you hear social-media enthusiasts talk about their media environment, authenticity (i.e. not pretending to be someone/something you're really, absolutely, not) is the key thing to aim for. Ignoring the amusing Heideggerian implications of this use of authenticity ("How very They!"), I think we can take this to mean that there is a 'currency' in social media called 'authenticity'. There are varying ways of gauging this currency. Continue reading