Apple’s hardware and communications products continue to be widely purchased and used by people around the world. Comscore reported in March 2013 that Apple enjoyed a 35% market penetration in Canada, and its desktop and mobile computing devices remain popular choices for consumers. A messaging service, iMessage, spans the entire Apple product line. The company has stated that it “cannot decrypt that data.”
Apple’s statements concerning iMessage’s security are highly suspect. In what follows I summarize some of the serious questions about Apple’s encryption schemes. I then discuss why it’s important for consumers to know whether iMessages are secure from third-party interception. I conclude by outlining how Canadians who use the iMessage application can use Canadian privacy law to test the validity of Apple’s claims against those of the company’s critics.
iMessage is an instant messaging service developed by Apple. Earlier this year there were claims that the US government could not intercept iMessages at telecommunications services’ interfaces because iMessage uses what’s called ‘end-to-end’ encryption. Such encryption places the private keys in the hands of the individual subscribers, or their computing devices, at the ‘ends’ of the network. Private keys are used to transform ciphertext into plaintext: they decrypt communications so that the recipient can read what has been communicated. However, while the content of communications could not be captured at the service provider (e.g. AT&T, Rogers, Bell), civil liberties advocates doubted Apple’s security claims on the basis that “Apple’s service is not designed to be government-proof.”
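The core idea, that only the endpoints ever hold the material needed to decrypt, can be illustrated with a toy Diffie-Hellman key agreement. This is a generic sketch of end-to-end key establishment, not Apple's actual iMessage protocol, and the parameters are deliberately tiny and insecure:

```python
import hashlib

# Toy Diffie-Hellman key agreement. Each 'end' keeps a private key on the
# device and publishes only a public value; the service in the middle never
# sees anything that lets it decrypt. Parameters are illustrative only.
P = 0xFFFFFFFFFFFFFFC5   # a 64-bit prime; real systems use far larger groups
G = 2

alice_private = 0x1234567890ABCDEF     # never leaves Alice's device
bob_private = 0xFEDCBA0987654321       # never leaves Bob's device

alice_public = pow(G, alice_private, P)   # safe to send across the network
bob_public = pow(G, bob_private, P)

# Each side combines its own private key with the peer's public value.
alice_shared = pow(bob_public, alice_private, P)
bob_shared = pow(alice_public, bob_private, P)
assert alice_shared == bob_shared      # identical session secret at both ends

session_key = hashlib.sha256(str(alice_shared).encode()).digest()
```

So long as the private values genuinely stay on the devices, a provider relaying only the public values cannot derive the session key; the disputes about iMessage turn on whether that condition actually holds.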
Doubts concerning the security of Apple’s messaging service were raised because of the multitude of ‘moving parts’ involved in the service. Matt Green, a noted academic and professional cryptographer, wrote that there were “lots of moving parts. TLS. Client certificates. Certificate signing requests. New certificates delivered via XML. Oh my.” In the same post, Green stated that “Apple operates as a Certificate Authority for iMessage devices. And as a Certificate Authority, it may be able to substantially undercut the security of the protocol.” The result was that the security of Apple’s system was rendered suspect.
When Edward Snowden released documents about the National Security Agency’s (NSA) PRISM program, Apple was one of the companies listed as providing the NSA direct access to the company’s services. In response, Apple issued a press release that included statements about iMessage and FaceTime security:
conversations which take place over iMessage and FaceTime are protected by end-to-end encryption so no one but the sender and receiver can see or read them. Apple cannot decrypt that data. Similarly, we do not store data related to customers’ location, Map searches or Siri requests in any identifiable form.
The issue, however, is that Apple’s encryption mechanisms run contrary to a basic test that can (roughly) evaluate whether a cloud service provider can potentially access subscribers’ private keys. Specifically, the ‘mud puddle’ test runs as follows:
- Drop your device(s) in a mud puddle.
- Slip in the mud puddle and crack yourself on the head. When you regain consciousness you’ll be perfectly fine except that you are utterly incapable of recalling your device password(s) or key(s).
- Try to get your data back.
If you can recover your data then your cloud provider can access your unencrypted data, either because they hold your private key or because they operate an escrowed key system that can, similarly, provide access to your private communications. As demonstrated by both Green and Ashkan Soltani, an independent security researcher and consultant, Apple lets you recover data after you reset your password. While this recovery mechanism is undoubtedly convenient for consumers, the ability to recover this data suggests that neither a password nor a device-specific key secures messages at rest. Your data is not sufficiently encrypted by Apple that it can be considered secure from modestly motivated law enforcement or intelligence organizations.
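A design that passes the mud puddle test looks roughly like the following sketch, in which the encryption key exists only as a function of the user's password: the provider holds the salt and ciphertext but can never recover the plaintext on its own. The cipher here is a deliberately toy repeating-key XOR, and nothing in this sketch reflects Apple's actual implementation:

```python
import hashlib, os

def derive_key(password: str, salt: bytes) -> bytes:
    # The key exists only as a function of the user's password; the provider
    # stores the salt and the ciphertext, but never the password or the key.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy repeating-key XOR, symmetric for encrypt/decrypt; a real design
    # would use an authenticated cipher such as AES-GCM.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

salt = os.urandom(16)
ciphertext = xor_cipher(derive_key("hunter2", salt), b"my imessage backup")

# Provider-side view after the mud puddle: salt and ciphertext, no password.
# Without the password the key cannot be re-derived, so recovery fails,
# which is exactly what a service that passes the test looks like.
garbage = xor_cipher(derive_key("forgotten-guess", salt), ciphertext)
assert garbage != b"my imessage backup"
```

The fact that Apple can restore your data after a password reset implies the company is not bound by this kind of construction: somewhere, a key that is not derived from your password can unlock the backup.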
Even individuals who do not use Apple’s backup system could see their messages intercepted by third-parties because Apple relies on a key lookup service to facilitate the end-to-end encryption. As noted by Green,
iMessage lets you associate multiple public keys with the same account – for example, you can you add a device (such as a Mac) to receive copies of messages sent to your phone. From what I can tell, the iMessage app gives the sender no indication of how many keys have been associated with a given iMessage recipient, nor does it warn them if the recipient suddenly develops new keys.
As a result, it is theoretically possible for Apple to add a new recipient – such as an interested government party – to a communication, nominally preserving ‘end-to-end’ encryption whilst simultaneously enabling government surveillance. From this analysis of iMessage’s security posture, two things become apparent:
- It is possible to recover iMessage data, and the recovery does not seem to be predicated on passwords or iDevice-specific encryption keys. The result is that new devices can be purchased and used to retrieve a history of messages. That this history is available to end-users shows that Apple’s system does not pass the mud puddle test. If Green is right on this point then Apple, arguably, is being deceptive (if not outright lying) concerning whether messages could be provided to a third party after they are in the iCloud backup storage environment.
- It is possible to include multiple parties in a communication without the sender necessarily knowing. The consequence is that Apple would not need to defeat the end-to-end encryption by actually cracking it: the company could simply bypass it by adding an extra recipient key. Apple, here, would be engaged in ‘truthy’ explanations of security, since data could still be provided to third parties without ever confronting the challenges that encryption poses.
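The silent-key-addition scenario Green describes can be sketched with a toy key-lookup directory. All names here are illustrative, and the mock strings stand in for real public-key encryption; this is not Apple's actual lookup service:

```python
# Toy key-lookup directory. The sender asks the directory for the
# recipient's public keys and encrypts one copy of the message per key.
directory = {
    "bob@example.com": ["bob-phone-key"],   # Bob's only legitimate device
}

def send_message(recipient: str, message: str) -> dict:
    # The sender trusts whatever key list the directory returns and has no
    # way to tell how many keys *should* be present.
    keys = directory[recipient]
    return {key: f"enc({key}, {message!r})" for key in keys}

honest = send_message("bob@example.com", "hello")
assert list(honest) == ["bob-phone-key"]    # one copy, readable by Bob's phone

# A compromised directory silently appends an extra key. The sender's
# client behaves identically and displays no warning.
directory["bob@example.com"].append("interceptor-key")
tapped = send_message("bob@example.com", "hello")
assert "interceptor-key" in tapped          # a third party can now decrypt
```

Because the directory, not the sender, decides which keys receive a copy, whoever operates the directory can insert a listener without touching the encryption itself.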
So, why does it matter that iMessage communications might be vulnerable to third-party surveillance? In what follows, I briefly address the importance of communicative privacy and then move on to suggest how Canadians can ascertain the truth(iness) of Apple’s claims and get evidence to test Green’s hypotheses.
Privacy protections are, generally, in society’s interests because privacy protects a range of activities that individuals might otherwise not participate in. Though privacy is, to some extent, recognized as securing the capabilities of individuals, it is far more than an individual right or means of protection. Julie Cohen argues that we must move away from the liberal understanding of privacy as a negative liberty because, per such an understanding, “[p]rivacy preserves negative space around individuals who are already fully formed or mostly fully formed, affording shelter from the pressures of societal and technological change.” Such a conceptual framing does not account for how individuals are embedded in intersubjective conditions and, as such, insufficiently appreciates the broader dangers linked to undercutting the capacity for enjoying privacy.
Conceptions of privacy must account for the intersubjective condition of individuals because “[s]elfhood and social shaping are not mutually exclusive. Subjectivity, and hence selfhood, exists in the space between the experience of autonomous selfhood and the reality of social shaping. It is real in the only way that counts: we experience ourselves as having identities that are more or less fixed. But it is also malleable and emergent and embodied, and if we are honest, that too accords with experience.” Privacy is responsible for establishing the space within which this malleability is possible, letting us create spaces to explore, to communicate, and to associate with others without fearing the consequences of our non-harmful (but potentially socially adventurous) explorations coming to public light.
Apple’s claims of privacy are significant on the basis that the company is stating that iMessage is ‘safe’ for such explorations. Apple’s claims, as a result, are very different from companies that provide ‘contingent’ privacy assurances to their subscribers, that is, privacy that depends on corporate or government whims. Whereas ‘whim privacy’ depends on subscribers prospectively giving up some degree of their privacy for unknown reasons, Apple’s stated scenario suggests that a whim cannot undercut the secured and private communications associated with their messaging service. As opposed to privacy-by-policy, Apple is attesting that they adhere to a privacy-by-design communications model. On the basis of this ‘pro-privacy’ model the company can attempt to build market share since its subscribers, unlike those of other services, are supposedly safe from third-party, content-based, surveillance of iMessages.
In short, iMessage security statements matter because when people believe they are involved in private communications they act and speak in ways that they might suppress in public. The differentiation in behaviour is not a ground for suspicion but reflects how humans conduct their daily lives: what you say about a sensitive topic, and how you say it, may vary between friends and co-workers, or between government officials and family members. We adopt different norms of privacy based on the social scenario we are operating in, and Apple is assuring its users that they are in a fairly private domain of communications.
What Can Canadians Do?
At present we cannot, certifiably, know what Apple is or isn’t doing. We also can’t certifiably say that the critics of Apple are correct or not. Fortunately, there may be a legal avenue that Canadians could exploit to learn the truth(iness) behind Apple’s statements.
As I’ve written previously, under Canadian law all Canadians can request that companies explain and disclose the kinds of personal information that they retain about the requesting Canadian citizen or resident. Section 4.9, Schedule 1 of Canada’s federal privacy legislation, the Personal Information Protection and Electronic Documents Act (PIPEDA), legitimizes such requests and compels organizations to respond to requests when those companies have significant connections with Canada. Companies that establish an economic relationship with Canadians establish such a ‘significant’ relationship, and thus fall under the Privacy Commissioner of Canada’s jurisdiction. Consequently, Canadians can avail themselves of PIPEDA to compel Apple to disclose what information the company has collected and retained about Canadian citizens.
So, a Canadian citizen who uses the iMessage system can file a request to Apple asking the company to disclose the citizen’s iMessage history (perhaps modifying the template I developed to request data from social networking sites). If Apple can provide the plaintext then it is clear that Apple’s critics are correct and the company has been involved in deceptive business practices. If, however, Apple asserts that it cannot provide the plaintext of your iMessages you can then complain to the Privacy Commissioner. Your complaint could reference Matt Green’s work, and assert that Apple is not compliant with PIPEDA because their statements are incompatible with the known technical aspects of the iMessage program.
Having filed a formal complaint with the Commissioner we might learn what is actually going on, should the Commissioner investigate. Their investigation could – ought to – engage in a technical analysis of the iMessage system, such that we learn whether Apple’s critics are correct or not in their hypotheses. In an ideal world the critics will be proven wrong and we will discover that iMessages actually are secure. We might also learn some additional details about how Apple secures its customers’ communications. In a more likely world, however, we will find that a complaint is well-founded and that corrections to Apple’s descriptions of message security are required. Such corrections may simply entail updates to Apple’s public statements concerning message security: there is no guarantee or expectation that the company will have to change its existing practices.
iMessage is a popular communications program for Apple customers. It’s massively used around the world – as of January 2013 there were over 2 billion iMessages sent daily – and the service is accompanied by some pretty impressive security statements. As should be clear from my post, someone is either lying or is wrong about the cryptographic underpinnings of Apple’s iMessage system. Consumers deserve to know which is which so that they can communicate with the correct assurances concerning their communications’ confidentiality and privacy. In the face of Apple’s non-transparency regarding how iMessage actually works it’s time to get some government regulators involved.
- End-to-end encryption was touted as an important security aspect of iMessage and was explicitly noted in the product’s press release. ↩
- For more on how Certificate Authorities can be used to compromise message security, see: Christopher Soghoian and Sid Stamm. (2010). “Certified Lies: Detecting and Defeating Government Interception Attacks Against SSL.” SSRN. Published April 16, 2010. Last accessed November 26, 2012. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1591033. See also: Nevena Vratonjic, Julien Freudiger, Vincent Bindschaedler and Jean-Pierre Hubaux. (2011). “The Inconvenient Truth about Web Certificates.” Presented at The Workshop on Economics of Information Security (WEIS), Fairfax, Virginia, USA, June 14–15, 2011. Available at: http://infoscience.epfl.ch/record/165676. ↩
- Green does acknowledge that “it’s technically possible Apple uses security questions to encrypt the iCloud backups, and if this were true, it would strengthen the claim that it’s not possible for people other than the sender and receiver to read messages.” However, he also stated that “it’s not likely that security questions are being used to derive an encryption key, since the answers don’t contain enough entropy to securely encrypt the data.” ↩
- Daniel J. Solove. (2008). Understanding Privacy. Cambridge, Mass.: Harvard University Press. Pp. 173–5. ↩
- Julie Cohen. (2013). “What Privacy Is For.” 126 Harvard Law Review 1904. P. 1907. ↩
- Julie Cohen. (2013). “What Privacy Is For.” 126 Harvard Law Review 1904. P. 1909. ↩
- For more, see Helen Nissenbaum. (2009). Privacy in Context. Stanford: Stanford University Press. ↩