An increasing percentage of Western society is carrying a computer with them, every day, that is enabled with geo-locative technology. We call them smartphones, and they’re cherished pieces of technology. While people are (sub)consciously aware of this love of technology, they’re less aware of how these devices compromise their privacy, and that’s the topic of this post.

Recent reports on the state of the iPhone operating system show us that the device’s APIs permit incredibly intrusive surveillance of personal behaviour and actions. I’ll be walking through those reports and then writing somewhat more broadly about the importance of understanding how APIs function if scrutiny of phones, social networks, and so forth is to be meaningful. Further, I’ll argue that privacy policies – while potentially useful for covering companies’ legal backsides – are less helpful in actually educating end-users about a corporate privacy ethos. These policies, as a result, need to be written in a more accessible format, which may include a statement of privacy ethics baked into a three-layer privacy statement.

iOS devices, such as the iPhone, iPad, Apple TV 2.0, and iPod touch, have Unique Device Identifiers (UDIDs) that can be used to discreetly track how customers use applications associated with the device. A recent technical report, written by Eric Smith of PSKL, has shed light on how developers can access a device’s UDID and correlate it with personally identifiable information. UDIDs are, in effect, serial numbers that are accessible by software. Many of the issues surrounding the UDID are arguably similar to those around the Pentium III’s serial codes, which raised the wrath of the privacy community and were quickly discontinued (a report on the PIII privacy concerns is available here).

Application developers can combine the device identifier with the following attributes:

- authenticated login information (e.g. a banking application can link the UDID with a full banking consumer profile);
- the (nick)name of the iOS device’s owner;
- type of connection (e.g. wifi versus 3G);
- model type (version of iPhone, iPad, iPod Touch);
- home address;
- phone number;
- geolocation information (both GPS and Skyhook/Apple-collected information).

Significantly, there are no popups or warnings alerting users that this data is being collected. The API facilitates a level of data collection far exceeding what most consumers would expect, one that stands in direct contrast with Steve Jobs’ statement at the most recent All Things D conference, which I’ve previously transcribed as follows:

We’ve always had a very different view of privacy than some of our colleagues in the Valley. We take privacy extremely seriously. That’s one of the reasons we have the curated apps store. We have rejected a lot of apps that want to take a lot of your personal data and suck it up into the cloud. Privacy means people know what they’re signing up for. In plain English, and repeatedly, that’s what it means. Ask them. Ask them every time. Make them tell you to stop asking if they get tired of your asking them. Let them know precisely what you’re going to do with their data.

Unless I’ve missed an entire regime of collection notices, I had no idea such information was being harvested by application developers until I read Smith’s report. Arguably of equal significance, where SSL encryption is used to transmit data Smith can determine the receiving host, but not what is actually transmitted to that host. Where traffic terminates at Apple’s servers, the receiver is presumably responsible for iAds, but it is less obvious who other receivers are, what their need or desire for the data is, or what their long-term data retention and processing policies are. In essence, there’s no clear way of knowing what information is being hoovered up or what’s being done with it. ‘Free’ applications, in particular, are guilty of collecting UDID information, proving once again that if you’re not paying for a product – if you’re not a paying customer – you (and your personal information) are likely the actual product.
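To make the correlation problem concrete, here is a minimal sketch – every name in it is illustrative, not Apple’s actual API – of the server-side linkage Smith describes: once an application transmits the UDID alongside any single piece of identifying data, every later ‘anonymous’ submission keyed by that UDID is tied back to the person.

```python
# Hypothetical sketch of server-side UDID correlation. All identifiers,
# values, and the record() helper are invented for illustration.

profiles = {}  # UDID -> accumulated profile

def record(udid, **attributes):
    """Merge whatever an application happens to transmit into a single
    per-device profile, keyed by the device's serial number."""
    profiles.setdefault(udid, {}).update(attributes)
    return profiles[udid]

# A login-backed app links the UDID to a real identity once...
record("e0101b1f9cd9", account="alice@example.com")
# ...and every subsequent 'anonymous' submission is thereby identified.
record("e0101b1f9cd9", location=(48.43, -123.37), connection="wifi")

print(profiles["e0101b1f9cd9"])
```

The point is that no single transmission needs to be sensitive on its own; the stable identifier does the work of joining them.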

Also of interest in Smith’s report is that cookies are being placed in applications’ folders, rather than in Safari’s Cookies folder. This prevents end-users from easily removing the cookies using the iDevice’s built-in option to do so (Settings >> Safari >> Clear History/Cookies/Cache). Combined with the incredible duration of these cookies – some expire only after 20 years – application developers can determine when an individual switches devices: when you switch (upgrade, use multiple iDevices, etc.) the company places a cookie with the same ID on the new device as soon as you log in, and adds the new device’s information to its customer databases. Given the cookies’ excessive durations, it’s unlikely that a new cookie will ever be issued to a user unless they create a separate, brand new account.

The ‘cookie problem’ is made even worse by Mobile Safari permitting the creation of client-side storage databases. These are often used by advertisers – Ars Technica has a walkthrough of Ringleader Digital’s system – to track users as they move around the Internet. Such databases are, for almost all intents and purposes, impossible to remove. The only way to ‘opt out’ of them is to (a) realize what’s going on, and (b) go to Ringleader’s website and have them place a unique identifier in the database they create on your device, indicating that you’ve chosen to opt out of the tracking. Having demonstrated the technical ingenuity and willingness to (in effect) exploit HTML5 and Mobile Safari, Ringleader then expects you to simply trust them to do the right thing after you opt out. Few users will likely ever know that these databases exist, let alone where and how to opt out, and fewer still will trust Ringleader to follow through on its privacy promises.
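The device-switch tracking described above can be sketched in a few lines. This is a hypothetical model, not any vendor’s actual code: the server keeps one tracking ID per account, and every device that logs in receives a cookie carrying that same ID with a far-future expiry, so replacing the hardware never resets the tracking.

```python
import datetime

# Illustrative model of account-keyed, near-permanent cookies. The
# account name, tracking ID, and device names are all invented.

TWENTY_YEARS = datetime.timedelta(days=365 * 20)

account_tracking_ids = {"alice": "track-7f3a"}  # server-side, per account
device_cookies = {}  # device -> (tracking_id, expiry)

def on_login(account, device, now):
    """On login, re-issue the account's existing tracking ID to whatever
    device the user happens to be holding, with a 20-year expiry."""
    tid = account_tracking_ids[account]
    device_cookies[device] = (tid, now + TWENTY_YEARS)
    return tid

now = datetime.datetime(2010, 10, 1)
old = on_login("alice", "iphone-3gs", now)
new = on_login("alice", "iphone-4", now)  # user upgrades hardware
assert old == new  # both devices carry the same identifier
```

Because the expiry is effectively ‘never’, the identifier outlives any individual handset, which is exactly what makes clearing Safari’s cookies irrelevant here.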

Requiring a unique identifier to avoid surveillance is less than promising, and lacks transparency from the end-user’s perspective. Moreover, Apple almost implies that this kind of behaviour is permissible, given that it has developed its own opt-out system, relying on similar mechanisms, for its iAd advertising ecosystem. Further, Apple’s willingness to bury locational tracking controls in the newest iteration of iOS – accessed through Settings >> General >> Location Services >> (Settings for applications) – shows that while Steve might talk about privacy, Apple certainly isn’t integrating an ethos of privacy by design into the products themselves, nor is it shaping the application ecosystem to respect privacy. In this way, Apple and Facebook appear to be closely aligned in how they ‘address’ privacy in their respective third-party application ecosystems.

Of course, the developers using UDIDs, setting near-permanent cookies, and deploying ‘zombie’ databases are all taking advantage of existing APIs. Such APIs are required to develop applications, and the application marketplaces are (arguably) what drive so much of iDevices’ desirability. The potentialities of APIs themselves, however, are reflections of a set of value decisions made by Apple (and by developers of APIs more generally). The UDID is not provided for nefarious reasons; arguably, it is there so that developers have some kind of unique identifier they can take advantage of instead of spending hundreds of hours creating a secure login and authentication system for each application they produce. By making the UDID available, Apple reduces the ‘friction’ individuals experience when they actually use an application, which enhances the likelihood that individuals will try the application in question. Registration forms carry substantial costs: each field significantly reduces the likelihood that a customer will actually complete an identity-related transaction. Friction, however, promotes consciousness about privacy and/or an awareness of the customer’s limited time.
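The appeal to developers can be sketched as follows – a hypothetical illustration (none of these names are Apple’s API) of why the UDID stands in for an entire account system:

```python
# Zero-friction 'login': the device serial *is* the account key.
# No registration form, no password -- and no consent step either.
# All names here are invented for illustration.

def get_or_create_session(udid, sessions):
    """Return the existing per-device profile, or silently create one."""
    if udid not in sessions:
        sessions[udid] = {"created_for": udid, "history": []}
    return sessions[udid]

sessions = {}
s1 = get_or_create_session("e0101b1f9cd9", sessions)
s2 = get_or_create_session("e0101b1f9cd9", sessions)
assert s1 is s2  # same device, same profile -- no sign-in ever shown
```

The convenience is real, which is precisely why the value decision behind exposing the identifier deserves scrutiny: the user never sees the moment at which they became trackable.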

In the process of developing a wider ecosystem – one that is dominantly intended to fuel the sales of hardware and secondarily to enhance revenue streams in the various iStores – Apple has a responsibility associated with its APIs. The ‘privacy’ policy that Apple makes available to users of iDevices is absurd; the last one ran 57 pages on the iPhone’s screen and had various buried clauses. Admittedly, I think that Apple is trying to do what its lawyers are telling it is right – if you read the privacy policy, it broadly permits many of the surveillance processes discussed above (e.g. collection of locational and other information) – but without knowledge of the actual APIs an end-user is entirely unable to contextualize the policy. It is patently unreasonable to expect your end-users to be developers (or lawyers), with access to developer tools and the time to competently play with them, just to understand your corporation’s privacy policy.

So, what is the solution? In an ideal world, Apple would genuinely adhere to what Steve Jobs stated in his All Things D interview: when an application on an iDevice wants any kind of personal information – and a unique identifier should constitute such information as soon as it is combined with information that can identify an individual – it will ask you. When the UDID, your mobile phone number, address, type of wireless connection used, and so forth are harvested, developers should be required to ask permission before grabbing them, and this requirement should be hardcoded into the developer API. Perhaps the Europeans will be able to force Apple (and other API developers whose APIs enable privacy-invasive practices) to add this ‘friction’ to their ecosystems. Maybe there are grounds for a formal complaint to the Privacy Commissioner of Canada, on the basis that individuals cannot give meaningful consent to these collections of personal information, nor can they necessarily revoke this consent after having once given it. Both situations seem to demand the attention of Canadian regulators.

If you’re an application developer – today – what is the solution? Ideally, you implement an opt-in system; failing that, you should adopt a three-layer privacy agreement with your end-users, one that is prominently displayed at the first launch of the program and with each reinstallation/update. The first ‘layer’ should contain understandable, actionable privacy statements: ‘We do X’, ‘We do not do Y’, and ‘We believe in Z’ would all make good ‘privacy principle’ statements. These statements should be guided by an actual formal ethics of privacy – one that is embedded into the API, the code of the application, and the ecosystem more broadly – that, when instantiated, would curtail privacy-invasive possibilities during the development stage.

The second layer may be more detailed, better integrating the principles and ethics with clear legal accountability. Whereas layer one might be a single page, layer two might be two or three pages, in a readable font and written at an accessible level of language. Get a readability expert to go through both: if a thirteen-year-old can’t understand layer one, it needs a rewrite, and if a seventeen-year-old can’t understand layer two, it needs a rewrite as well.
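The age thresholds above roughly correspond to reading-grade levels (about grade 8 for a thirteen-year-old, grade 12 for a seventeen-year-old), so a first-pass check can even be automated. Here is a crude, self-contained sketch using the Flesch-Kincaid grade-level formula; the vowel-group syllable counter is a rough heuristic, and a real review should still involve a readability expert, not just a script.

```python
import re

def syllables(word):
    # Rough heuristic: count runs of vowels as syllables, minimum one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade level: 0.39*(words/sentences)
    + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syls = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syls / len(words)) - 15.59

# A hypothetical layer-one statement: short sentences, plain words.
layer_one = "We collect your location. We never sell it. You can say no."
print(round(fk_grade(layer_one), 1))  # well below grade 8
```

Plain ‘We do X’ statements score in the early grades, while typical policy legalese scores far beyond grade 12 – which is exactly the gap the layered approach is meant to close.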

The final layer will be the typical legalese, but contextualized by layers one and two. This should mean that individuals who read layer three can actually frame some of its more obscure clauses…and if they can’t, it should at least give opposing counsel and regulators grounds to argue whether developers are misleading their users.

Privacy policies are largely garbage from an end-user perspective: they’re almost entirely unreadable, unclear, and demand considerable amounts of time and high degrees of education to parse. API developers need to adopt an ethic of privacy, instil it throughout their code, and cut off those abusing the API in ways that clearly violate both the terms and the spirit of the privacy ethic and policy. APIs should be run past privacy-minded technologists prior to being rolled out, and be modified where it is clear that the API permits and encourages invasive surveillance without the end-user’s consent. Ideally we’d see mass opt-in requirements for this kind of surveillance, but I fear that this is unlikely, at least in the short term. Developing an ethic of privacy, combined with accessible three-layer privacy policies, might at least keep application and API developers honest at best, and give grounds for suit before the FTC, OPC, and EU Commission at worst.