
On August 5, 2021, Apple announced that it would soon begin conducting pervasive surveillance of the devices that it sells, with the stated intent of expanding protections for children. The company announced three new features. The first will monitor for children sending or receiving sexually explicit images using the Messages application. The second will monitor for the presence of Child Sexual Abuse Material (CSAM) in iCloud Photos. The third will monitor for searches pertaining to CSAM. These features are planned to be activated in the United States in the next versions of Apple’s operating systems, which will ship to end-users in the fall of 2021.

In this post I focus exclusively on the surveillance of iCloud Photos for CSAM content. I begin with a background of Apple’s efforts to monitor for CSAM content on their services before providing a description of the newly announced CSAM surveillance system. I then turn to outline some problems, complications, and concerns with this new child safety feature. In particular, I discuss the challenges facing Apple in finding reputable child safety organizations with whom to partner, the potential ability to region-shift to avoid the surveillance, the prospect of the surveillance system leading to ongoing harms towards CSAM survivors, the likelihood that Apple will expand the content which is subject to the company’s surveillance infrastructure, and the weaponization of the CSAM surveillance infrastructure against journalists, human rights defenders, lawyers, opposition politicians, and political dissidents. I conclude with a broader discussion of the problems associated with Apple’s new CSAM surveillance infrastructure.

A previous post focused on the surveillance of children’s messages to monitor for sexually explicit photos. Future posts will address the third child safety feature that Apple has announced, as well as the broader implications of Apple’s child safety initiatives.

Background to Apple Monitoring for CSAM

Apple has previously worked with law enforcement agencies to combat CSAM, though the full contours of that assistance are largely hidden from the public. In May 2019, Mac Observer noted that the company had modified their privacy policy to read, “[w]e may also use your personal information for account and network security purposes, including in order to protect our services for the benefit of all our users, and pre-screening or scanning uploaded content for potentially illegal content, including child sexual exploitation material” (emphasis not in original). Per Forbes, Apple places email messages under surveillance when they are routed through its systems. Mail is scanned and, if CSAM content is detected, Apple automatically prevents the email from reaching its recipient and assigns an employee to confirm the CSAM content of the message. If the employee confirms the existence of CSAM content, the company subsequently provides subscriber information to the National Center for Missing and Exploited Children (NCMEC) or a relevant government agency.1

Apple’s ability to conduct the aforementioned surveillance is possible because of how user data is encrypted when it is stored on, or routed through, Apple’s systems. As noted in the company’s iCloud security overview,2 the majority of users’ data is secured on Apple’s servers using keys that Apple possesses and controls. Apple distinguishes between information that it can decrypt and that which it cannot, with the latter being referred to as “sensitive information.” Information that Apple does not consider sensitive includes, but is not limited to:

  • Backups
  • Contacts
  • Find My (Devices and people)
  • iCloud Drive (and files stored in it)
  • Messages content stored in iCloud
  • Notes that are not secured with a user’s unique password
  • Photos
  • Voice memos
  • Mail stored on iCloud

As should be apparent, Apple has historically been able to access a wide swathe of users’ content to monitor for CSAM and has, at least in recent years, built some surveillance infrastructure to detect CSAM content. The company also complies with lawful government orders for the contents of users’ iCloud accounts.3 Whenever it detects and confirms that its users are using Apple services to send, receive, or store CSAM content, the company will disable the account associated with the content. As noted by Gabriel Dance, deputy investigations editor for the New York Times, Apple has not historically applied automated CSAM detection systems to users’ photos which are stored on Apple’s servers.

Monitoring for CSAM in Apple Photos

Whereas Apple currently monitors at least their email service for whether CSAM content has been transmitted, their newly announced mode of surveillance relies on individuals’ devices as well as Apple’s servers to detect CSAM content.4 Under this new surveillance system, devices that run iOS and iPadOS 15 and operate in the United States will receive a secured set of hashes of known CSAM content. Individuals who own or control the Apple devices will not have access to these hashes.

When a photo is stored in the Apple Photos application on an Apple device, and iCloud Photos is enabled, a perceptual hashing function will analyze the photo and convert it to a hash. In doing so, the hashing function will assess elements of a given photo such that a slightly modified version of the photo will produce the same hash as the unmodified original. Apple has indicated that this process is intended to reduce the error rate associated with hashing while also preventing individuals from slightly modifying CSAM content in an effort to slip past the hashing mechanism.5
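For readers unfamiliar with perceptual hashing, the toy ‘difference hash’ below illustrates the general property being described: the fingerprint is derived from the relative brightness of neighbouring pixels, so a uniformly brightened copy of an image hashes to the same value as the original. This is only a conceptual sketch with made-up inputs; Apple’s actual function, NeuralHash, is a neural-network-based algorithm and is not reproduced here.

```python
# A toy "difference hash" (dHash) illustrating perceptual hashing.
# This is NOT Apple's NeuralHash; it is a simple stand-in showing why
# small modifications to an image can leave its hash unchanged.

def dhash(pixels):
    """Hash a grayscale image (list of rows of 0-255 ints) by comparing
    each pixel to its right-hand neighbour: 1 if brighter, else 0."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    # Pack the bit string into an integer fingerprint.
    return int("".join(map(str, bits)), 2)

original = [
    [200, 50, 120, 90],
    [30, 180, 60, 210],
    [90, 90, 200, 10],
]
# Uniformly brighten every pixel by 5: the relative ordering of
# neighbouring pixels is unchanged, so the hash is identical.
brightened = [[min(p + 5, 255) for p in row] for row in original]

assert dhash(original) == dhash(brightened)
```

A production perceptual hash is far more robust (surviving crops, re-encodes, and colour shifts), but the underlying idea of fingerprinting visual features rather than raw bytes is the same.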

Before that photo is uploaded to iCloud Photos, the photo’s hash will be compared against a pre-loaded list of CSAM hashes. After this comparison is performed, each photo to be uploaded will receive a cryptographic voucher that records the match results, with the voucher and photo both then being uploaded to iCloud Photos. Apple is integrating a degree of ‘noise’ into the vouchers: the company claims that some vouchers will deliberately, and falsely, indicate that they are associated with CSAM. The intent is to prevent Apple from treating the presence of these vouchers as definitively linked to CSAM content before they have been decrypted and manually reviewed by an Apple employee.
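The voucher logic described above can be loosely sketched as follows. The hash values, the synthetic-match rate, and the explicit match flag are all hypothetical: in Apple’s actual design the match is computed through private set intersection, so the device itself never learns the result, and the real rate of synthetic matches is undisclosed.

```python
import random

# Conceptual sketch of safety-voucher generation with synthetic noise.
# Unlike Apple's real design (private set intersection), this toy makes
# the match result explicit and simply mixes in deliberate false
# positives, to illustrate why no single voucher can be read as proof
# of a CSAM match.

KNOWN_CSAM_HASHES = {0x1A2B, 0x3C4D}   # stand-in hash list
SYNTHETIC_MATCH_RATE = 0.05            # assumed; Apple's rate is undisclosed

def make_voucher(photo_hash, rng=random):
    real_match = photo_hash in KNOWN_CSAM_HASHES
    # With some probability, mark a non-matching photo as a match anyway.
    synthetic = (not real_match) and rng.random() < SYNTHETIC_MATCH_RATE
    return {"hash": photo_hash,
            "flagged": real_match or synthetic,
            "synthetic": synthetic}

vouchers = [make_voucher(h) for h in [0x1A2B, 0x9999, 0x3C4D, 0x5555]]
flagged = [v for v in vouchers if v["flagged"]]
# At least the two real matches are flagged; extras may be synthetic noise.
assert len(flagged) >= 2
```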

Once on Apple’s servers, the safety vouchers will be interrogated by the company’s systems. None of the vouchers are readable by Apple until a threshold of safety vouchers indicating the presence of CSAM is reached; Apple is not disclosing this threshold in order to prevent individuals from ‘gaming’ the system. Only once the threshold is reached will the photos with CSAM vouchers be decrypted and sent for manual review to Apple employees.6 Those employees, if they discover that the photos contain CSAM content, will subsequently disable the user’s account and forward a report to the National Center for Missing & Exploited Children (NCMEC). Users who believe that their account has been disabled in error can file a complaint with Apple.
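The threshold gate can be sketched at the policy level like this. In Apple’s described design the gate is cryptographic (threshold secret sharing), meaning that below the threshold the server is mathematically unable to decrypt any voucher; this toy merely counts flags, and the threshold value is an assumption, since Apple has not disclosed the real one.

```python
# Policy-level sketch of the threshold gate. In Apple's described design
# the gate is enforced cryptographically via threshold secret sharing;
# here we merely count flagged vouchers to show the reporting logic.

MATCH_THRESHOLD = 30  # assumed value; Apple has not disclosed the real one

def vouchers_ready_for_review(vouchers):
    """Return flagged vouchers only once the threshold is met; below it,
    the server learns nothing actionable about any single voucher."""
    flagged = [v for v in vouchers if v["flagged"]]
    if len(flagged) < MATCH_THRESHOLD:
        return []            # below threshold: nothing is decryptable
    return flagged           # at/above threshold: sent for manual review

below = [{"flagged": True}] * 29 + [{"flagged": False}] * 100
above = [{"flagged": True}] * 30

assert vouchers_ready_for_review(below) == []
assert len(vouchers_ready_for_review(above)) == 30
```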

Apple’s CSAM detection system will first be deployed in the United States, though Apple has confirmed that they will, “consider any potential global expansion of the system on a country-by-country basis after conducting a legal evaluation.” The company has not provided a timeline for global expansion of the system. It is reasonable, however, to expect that Apple will expand the set of countries in which this system operates, especially as the company faces pressure to report CSAM and other illicit materials to government agencies.

Apple states that its monitoring for CSAM content will exclusively be for content that has been positively identified as constituting CSAM. The company’s systems will not be monitoring for potential CSAM content. Presumably Apple will continue to receive updated hashes, over time, from NCMEC and other child safety organizations and those hashes will be added to the existing hash lists which will be securely stored on users’ devices.7

Apple frames their surveillance approach as a more privacy-protective way of assessing whether an individual has been uploading CSAM content than analyzing all of the content that individuals store on Apple’s servers. The company has unilaterally come to this conclusion on the basis that the new iCloud Photos surveillance system will leave files encrypted on the company’s servers–and secure from Apple’s examination of them–save for in cases where a sufficient number of safety vouchers triggers a decryption of the CSAM-associated photos, and subsequent manual analysis of the potential CSAM content.

It is worth noting that Apple’s current iCloud configuration means that while the company may have decided not to analyze additional images, it has the capability to do so because it holds the decryption keys for images stored on its servers. In a significant way, while Apple’s new surveillance approach pushes some of the analysis and processing down to the client level for first-level analysis of images, the server infrastructure will continue to be designed such that Apple can decrypt much of the content that their users have uploaded to iCloud when using the company’s services.8 Furthermore, Apple has not indicated that it will cease its current selective server-side analysis for CSAM content in applications that are not end-to-end encrypted, and which are presently monitored for CSAM content, such as its Mail product.

Potential Problems and Complications of Apple’s CSAM Surveillance

Apple staff have indicated in press interviews that they are aware of the gravity of their decision to deploy this CSAM surveillance infrastructure. Nonetheless it is important to outline how Apple’s unilateral decision may lead to a series of more and less obvious potential problems and complications. As an aside, I do not delve deeply into the nuanced technical details of Apple’s system in the subsequent sub-sections given that other experts are better suited to engage in such assessments, and are engaged in such analyses elsewhere.

Reputable Child Safety Agencies

Apple’s iCloud Photos CSAM detection system will initially only apply to iOS and iPadOS devices which have been upgraded to version 15 or later, and only to users who are registered as American users. In the United States, NCMEC is the only non-governmental agency that is authorized to retain CSAM content. NCMEC is responsible for generating the hashes that are often used to algorithmically assess whether content that is shared or hosted on companies’ services or systems has been identified as CSAM.9

Apple has indicated that it will, on a case-by-case basis, expand the countries where it will apply automated CSAM monitoring of iCloud Photos content. As of writing, the company has not published the principles that will guide this case-by-case assessment nor how it will determine which non-NCMEC organizations can contribute to the hash list that will be deployed to all Apple devices. Apple has, however, made clear that they will be using a uniform global hash list that is comprised of hashes from all enrolled child safety organizations.10 There will not, seemingly, be unique lists for different jurisdictions.

In addition to clarifying the principles used to enrol new child safety organizations, and their hashes, into its surveillance infrastructure, Apple will also need to assess to whom it will disclose its CSAM reports when detecting the material in non-US jurisdictions. It is plausible that the company may rely on contacts it has already developed for its current CSAM reporting regime but, to date, information on how Apple will determine whom to send these reports to is not publicly available.

In aggregate, how Apple will decide which non-US organizations can provide hashes to include on Apple devices, and to whom it will send its CSAM reports, remains unknown. After developing (or implementing) its assessment criteria, there is the prospect that certain jurisdictions will be unable to avail themselves of Apple’s CSAM surveillance infrastructure should a jurisdiction lack an organization that Apple considers suitable to receive reports. While, in some cases, this may exclude non-democratic or low rule-of-law jurisdictions, it might also exclude those that lack the capacity to establish and run an organization equivalent to NCMEC. The roll-out of Apple’s CSAM surveillance, then, may be very uneven, with the effect that certain Western and Western-aligned nations can benefit from semi-automated reporting whereas others cannot.

Region Shifting to Avoid CSAM Surveillance

Apple’s CSAM surveillance will initially be applied to users located in the United States with the prospect of Apple extending the system to other countries over time. Given that numerous countries are adopting, or examining the passage of, online harms legislation that would extend existing requirements on providers to monitor for, and report on, the existence of CSAM content we can reasonably expect Apple’s CSAM surveillance to expand to new jurisdictions in the near future. There are, however, some countries that Apple may not expand their CSAM surveillance system into either for lack of a local governmental or non-governmental body to report CSAM content, or because a reporting agency in a given jurisdiction is largely ineffective or is corrupt.

If Apple continues to clearly declare the jurisdictions wherein the CSAM surveillance infrastructure is deployed, then individuals who are motivated to store CSAM content using iCloud Photos may be able to evade the surveillance by creating user accounts for jurisdictions that have not been enrolled in the CSAM surveillance regime. Today, the Apple user ID system already has the ability to ‘tattoo’ an individual as belonging to a given jurisdiction, which may suggest that choosing a certain region at the time of account creation could affect how Apple applies its CSAM surveillance infrastructure.11 If individuals can evade surveillance in this manner, it remains to be seen how efficacious Apple’s CSAM surveillance will actually be in detecting the storage of CSAM content in iCloud Photos.

Ongoing Harms to CSAM Survivors

Individuals who have been subject to child sexual abuse, and who had materials documenting that abuse created by their abusers, may have unique physical characteristics such as moles, birthmarks, or other features. Apple’s system of assessing images to detect registered CSAM has not been adversarially tested, and thus it is unclear whether individuals who were subject to abuse could have photos taken of them as adults identified as CSAM, depending on what the on-device perceptual hashing functions are examining.12 It is inappropriate for CSAM survivors to have to worry about whether privately owned devices, which are backing up photos that include their bodies, could be implicated in exposing their images and bodies to Apple employees when those employees conduct manual assessments of images prior to sending a report to NCMEC. In short, Apple’s system may risk further harming CSAM survivors by the very act of looking at non-CSAM photos of survivors without their meaningful and explicit consent.

Of note, even should this kind of reporting not occur, individuals may be less willing to engage in lawful behaviour–being in photographs where they have consented to the photo being taken–on the basis of fearing that they will be further harmed as a result of Apple’s reporting systems. The fear of surveillance, in itself, can chill legitimate and lawful personal activity. Regardless of the protections that Apple has put in place, the very existence and operation of the CSAM surveillance system could negatively affect how individuals experience their daily lives, to the point of seeking to avoid being photographed in order to prevent traumatizing experiences linked with how their now-adult body could be incorrectly identified as CSAM-related content.

Finally, should unique physical characteristics in CSAM content also be flagged when detected in images of adult bodies there is also a risk that photographers who may be taking those images could be flagged as possessing CSAM content when their images are uploaded to iCloud Photos. Depending on Apple’s manual review processes this could have the effect of revealing a photographed person’s survivor status, such as if a photographer’s iCloud account were disabled and a grievance process were initiated by the affected photographer who then learned that individuals they have photographed were victims of child abuse. Apple has not presented information on how these kinds of edge cases might be handled, which underscores the company’s failure to adequately explain how it will be analyzing what have historically been private photo collections with an intent of reporting specific kinds of illicit content to government or government-associated organizations.

Expansions of Monitored Content

Perhaps the loudest concern that has been raised is that Apple’s CSAM surveillance infrastructure might be expanded in the future to facilitate surveillance in excess of CSAM content. In comments provided to MacRumors, Apple:

… addressed the hypothetical possibility of a particular region in the world deciding to corrupt a safety organization in an attempt to abuse the system, noting that the system’s first layer of protection is an undisclosed threshold before a user is flagged for having inappropriate imagery. Even if the threshold is exceeded, Apple said its manual review process would serve as an additional barrier and confirm the absence of known CSAM imagery. Apple said it would ultimately not report the flagged user to NCMEC or law enforcement agencies and that the system would still be working exactly as designed.

Apple’s explanation for how it would react should inappropriate material be fraudulently inserted into CSAM hash lists fails to engage with the core concerns raised by its critics. First, if a jurisdiction’s law requires Apple to report not just CSAM but other ‘illicit’ or ‘socially objectionable’ content (e.g., LGBTQ2 images, violent imagery or imagery designed to elicit violence, or sacrilegious images such as of the Prophet Muhammad that have been hashed and injected into a CSAM hash list), will the company refuse to report these materials? If we set aside non-democratic or low rule-of-law jurisdictions, what would happen if the United States, the United Kingdom, Canada, India, Australia, France, or similar countries pass laws which compel companies to report the presence of specific kinds of violent imagery to authorities, where those images had been injected into CSAM hash lists?

Apple’s head of privacy, Erik Neuenschwander, has stated that the company will not report on obviously legal content, such as if NCMEC were to add hashes for materials that are legal to possess. However, it is unclear how the company will respond should hashes be added in excess of CSAM content where that excess content is illegal in a given jurisdiction. Finally, given that Apple will apparently be operating a global hash list across all of its devices, any government that compels the insertion of a hash or series of hashes for non-CSAM content will cause that non-CSAM content to be monitored for on all Apple devices that have the CSAM surveillance activated, not just in the jurisdiction that compelled the insertion of the hash(es). Moreover, the ‘safety’ of Apple’s surveillance infrastructure, as described, presumes that Apple’s own internal systems are not compromised: should a state or state-adjacent operator successfully access Apple’s internal systems, they might be notified when content is identified by the CSAM infrastructure and take action against individuals who uploaded it, even if Apple declines to forward the material to a relevant government agency.

Second, Apple’s assertion that a certain amount of content must be identified as CSAM in order to flag an account for assessment presumes that the company can resist governmental pressures to modify the number of safety vouchers that must be linked to CSAM, or other monitored-for content, before flagging the user and their account for review. Apple has previously buckled under pressure from the Chinese government, including storing iCloud content on servers accessible to Chinese state authorities, limiting the availability of privacy and security software, and modifying the cryptographic systems and policies it uses in China to comply with Chinese government requirements. The company has also modified some limited elements of its operating system at the behest of the Russian government. Apple has also reportedly declined to provide its non-Chinese users with higher levels of security due to pressure from the FBI, and the company was identified as a participant in the NSA’s PRISM program.

Each of the aforementioned cases indicates that Apple is willing to adopt policies which are out of alignment with its privacy- and security-friendly corporate reputation, and that it is willing to modify its practices in response to local laws, government threats, or the risk that it will be denied the ability to operate in significant economic markets. Furthermore, in a related but separate incident, Hushmail was required to modify its services following the issuance of a Mutual Legal Assistance Treaty request linked to a US court order, to selectively enable a law enforcement investigation; Apple’s surveillance infrastructure might be similarly repurposed by law enforcement agencies to search for other images that are of interest to governments but are not CSAM content. In a related vein, in the United Kingdom the censorship system that Internet service providers were required to put in place to prevent access to CSAM content was repurposed to block access to copyright-infringing material as a result of a court decision.13 There is no self-evident reason why Apple, as well, might not be on the receiving end of judicial requirements that extend the scope of its surveillance infrastructure.

In aggregate, while Apple has previously maintained that it would resist efforts to create new cryptographic backdoors into its products, the company’s actual business practices reveal that it can and will modify its practices, and the security and privacy afforded to its users, when doing so is either required by law or necessary to ensure ongoing access to different global markets. Pressures on Apple will increase as a result of its decision to implement a prospectively global CSAM surveillance infrastructure. Government agencies in the United States and around the world can be expected to try to compel Apple to use the newly minted CSAM surveillance system for non-CSAM content.

Weaponization of Automated CSAM Detection Systems

Apple’s mobile operating systems are increasingly common targets for sophisticated state and non-state threat actors alike. Malware which can evade Apple’s on-device protections can potentially both exfiltrate and install files on an individual’s device.

Oppressive governments target the mobile phones of journalists, human rights defenders, lawyers, opposition politicians, dissidents, and even children in their efforts to exert control over, or harm, those who are targeted. These governments’ surveillance agencies have sometimes been created in partnership with former intelligence officers from other countries and, in other cases, rely on commercial surveillance and malware vendors to reach into targeted persons’ devices.

Broadly, governments could potentially use computer exploits to obtain remote access to devices and, subsequently, place known-CSAM content on the targeted devices for the purposes of triggering Apple’s iCloud voucher system with the goal of the targeted person being subject to police investigations concerning their possession of CSAM. They might hide the presence of this material by indicating that the photos should be hidden in the Photos app, preventing a user from realizing that the content has been added to their library.

Some governments, such as the Russian government, are well known for using legal processes to harass individuals who have opposed the government or spoken out against its oligarchs. The CIA has previously subverted vaccination programs to collect DNA, revealing that even agencies in democratic countries can flagrantly subvert socially beneficial programs to ostensibly advance national security activities. The reporting system that Apple is lionizing, in effect, may be subject to gaming by governments that are willing to behave illegally or illicitly to harass and defame individuals, or to otherwise advance national security objectives. Apple will not be situated to determine whether detected CSAM content genuinely belongs to an individual or was placed on an individual’s device by a malicious third-party, meaning that the company may ultimately become implicated in legitimizing what are illicit or illegal efforts to control or harm persons who are being targeted by state-motivated actors.

Discussion

Apple has designed a system to monitor for content on a per-jurisdiction basis, as showcased by the fact that the CSAM surveillance system will first be applied exclusively to its American customers and users. Apple has deployed this system despite having not previously scanned iCloud Photos content for known CSAM content using a service such as PhotoDNA, meaning that the system has been created and will be deployed with no clear way of determining if CSAM content is genuinely an issue for Apple’s user base. A surveillance infrastructure has been built without a publicly proven need for it to exist.

Apple’s actions will almost certainly encourage countries to further pressure Apple and other technology companies that sell devices or operate cloud infrastructures to monitor for, and take down, content that is found to be illegal in the jurisdiction in question. Some of that content will be related to CSAM but much of it will not. To be clear: this is a threat with all cloud-based infrastructures. But, at the same time, it means that even should Apple someday introduce an ‘end-to-end encrypted’ iCloud system, that encryption system will have a gaping backdoor built into it from the outset. Further, while Apple has created a surveillance system that does rely, in part, on server-side analysis, this is not a strictly required property of its CSAM surveillance infrastructure. In theory, a future version of the infrastructure could operate exclusively on-device and thus identify all problematic content, not just that which was uploaded to iCloud Photos. What currently stops this is Apple’s policy, as opposed to any technical inability to do so.14

Furthermore, Apple’s unilateral decision that this surveillance system will be integrated into its mobile operating systems and pushed to all Americans without engaging in public consultations speaks to a broader problem of how the company interacts with government and civil society alike. Apple has been under pressure to modify how it monitors for whether its users are sending or storing CSAM content using Apple’s services but the decisions the company has made ignore the broader implications of what this system truly represents: a corporate fiat of how it will address a pernicious social ill instead of the result of a consultation in good faith with a diverse set of stakeholders who have often strongly supported Apple for its efforts to champion user privacy and security.

Apple will initially monitor exclusively for CSAM that is uploaded to iCloud Photos, which means that individuals who deliberately possess CSAM may simply choose to store their content in other parts of their iPhone. Apple has, however, also stated that its efforts, “will evolve and expand over time.” We should expect that the company will expand its surveillance throughout the applications where CSAM content might be stored (e.g., Notes, Voice Memos, etc.) and potentially to non-Apple applications on Apple’s mobile devices which rely on iCloud storage.

Cloud providers have conducted differing degrees of surveillance of their users’ content for some time. To some extent, Apple’s CSAM surveillance might be misinterpreted as just another mode of such surveillance, but such an argument would miss the fact that individuals’ own devices are now recruited into conducting surveillance of themselves. Apple, along with other companies, has increasingly provided a series of services to back up deeply personal information with the promise that the company will protect and secure that information. In the case of photos, they were uploaded to iCloud Photos to keep them safe. Apple has now turned the tables insofar as it has built a system which will assess whether the content that is on-device and uploaded is legal or illegal. Apple’s head of privacy has assured Apple customers that if they aren’t doing anything illegal then they have nothing to fear, without acknowledging that what constitutes monitored illegal behaviour may expand as Apple’s surveillance infrastructure grows (perhaps as a result of government compulsion) to watch for increasingly large volumes of illegal material.15

Apple has shown, in the past, that it will bend the knee to democratic and authoritarian governments alike. It may modify its client-server surveillance in the future to appease those same governments. Democratic governments may also call for such changes to ostensibly better monitor CSAM content; now that individuals know that they need only disable iCloud Photos to prevent their photos from being flagged by Apple as CSAM content, democratic governments may call on Apple to revise its current practices to better identify individuals whose devices hold CSAM. While Apple may have meant to create a line in the sand that it will not cross, it may not be a line that the company can hold. What prevents expansions of surveillance is now largely internal corporate policy, and such policy can change with new leadership or as a result of governmental or judicial pressures. Further, Apple, along with other major technology giants, ultimately participated in surveillance activities that were revealed by Edward Snowden. There is no reason why the company might not be forced to quietly change how its surveillance system operates sometime in the future in the name of American national security. Apple’s own assertions that it will not undertake certain surveillance activities are belied by how it commits to protecting human rights, when the company states that:

…we’re convinced the best way we can continue to promote openness is to remain engaged, even where we may disagree with a country’s laws…We’re required to comply with local laws, and at times there are complex issues about which we may disagree with governments and other stakeholders on the right path forward. With dialogue, and a belief in the power of engagement, we try to find the solution that best serves our users—their privacy, their ability to express themselves, and their access to reliable information and helpful technology.

Apple, “Our Commitment to Human Rights” (August 2020)

In short, while the company may disagree with and be opposed to surveillance obligations that states may impose onto Apple’s CSAM surveillance infrastructure, the company believes that engagement is the way it can best serve its users. Apple is not drawing a line in the sand, even in its human rights commitments, and may be compelled to expand what is subject to surveillance while continuing to assure its customers that doing so remains compliant with its self-declared human rights commitments. Apple’s willingness to provide less-secured iCloud storage systems for its Chinese users is a clear demonstration that it regards ‘engagement’ (and its profits and manufacturing base) as more important than the risk of being implicated in repressive state behaviours.

Lastly, Apple’s efforts to detect and report CSAM are often regarded as a way to solve the problem of this content’s existence. But law enforcement agencies are, today, already notified tens or hundreds of millions of times each year about individuals who possess CSAM content. In reporting the presence of these materials Apple may escape public scrutiny, insofar as the New York Times will not condemn it for failing to send reports to NCMEC, but Apple’s actions will do little to address the creation or proliferation of CSAM content itself. Family members, close family friends, or persons in authority are responsible for the vast majority of child sexual abuse. A considerable amount of CSAM content is created in impoverished nations, or in impoverished regions of countries, where the creation and sale of CSAM content, and the making available of children for sexual abuse, is regarded as a desperate way to earn money. Apple’s reporting of the presence of CSAM content will do little to address the deep socio-economic conditions that drive the creation of much of this content. No one should think that, simply because Apple reports more cases of CSAM content to child abuse agencies, there will be a decrease in how regularly this material is created or how regularly children are abused.

Conclusion

The creation, propagation, and storage of CSAM content is a serious issue, and one which technology providers have a role to play in addressing. It is not self-apparent what that role is, however, given concerns about how the different decisions that companies take can lead to second- and third-order consequences. Efforts such as those led out of Stanford University have sought to bring stakeholders together to think through how to provide strong privacy and security assurances while simultaneously addressing the ills associated with CSAM, violent and extremist content, and other socially undesirable and often illegal content and behaviours. Such stakeholdering efforts should continue and should be congratulated.

Apple has chosen not to involve itself in these stakeholdering meetings and has, instead, congratulated its internal teams for developing and implementing its forthcoming CSAM surveillance system. The company’s unwillingness to engage the broader community that has been assessing how best to address the ill of CSAM content is the definition of corporate irresponsibility. Apple has also largely failed to substantively engage with critiques of its CSAM surveillance system, with questions of how it might be compelled to modify the system, or with how the development of this system could create broader pressures on other companies.

In the end, Apple has entirely reshaped the tenor and scope of the debate around the kinds of surveillance that governments can expect companies to conduct on their behalf. Apple has unilaterally chosen to enrol its users in a global experiment in mass surveillance, seemingly underestimated the potential costs this could impose on individuals who are not involved in the creation or storage of CSAM content, and externalized any such costs onto a user base of one billion-plus individuals around the world. These are not the activities of a company that has meaningfully reflected on the weight of its actions but, instead, are reflective of a company that is willing to sacrifice its users without adequately balancing their privacy and security needs.


Footnotes

  1. In the United States the agency tends to be Homeland Security Investigations (HSI). HSI takes the lead because child exploitation materials were historically linked to smuggling and trafficking, and it now also handles cyber-related CSAM and trafficking matters. ↩︎
  2. I have uploaded a .pdf version of this document in case it changes between the time of writing and when you read this post. The document was accessed from Apple’s website on August 6, 2021. ↩︎
  3. Apple publishes how it complies with American (locally hosted .pdf) and non-American (locally hosted .pdf) government requests. ↩︎
  4. I’ve uploaded a .pdf of the CSAM Detection: Technical Summary document in case the URL changes or the document becomes unavailable in the future. ↩︎
  5. For a high-level discussion of this, see Jonathan Mayer’s Twitter thread on Apple’s perceptual hash functions and computer vision. For a deeper analysis of perceptual hashing for monitoring for harmful media, including the limitations of such technical modes of analysis, see Anunay Kulshrestha’s and Jonathan Mayer’s newest paper, “Identifying Harmful Media in End-to-End Encrypted Communication: Efficient Private Membership Computation.” ↩︎
  6. As discussed by Sarah Jamie Lewis, the possibility of false positives could be well in excess of Apple’s claimed ‘1 in a trillion’ likelihood. ↩︎
  7. As of writing, it has been reported that the hash list will include between two and three hundred thousand images. It is unclear at this time whether Apple will only monitor for the worst-of-the-worst CSAM content (i.e., A1 material), whether its hash lists will grow while still exclusively including the worst materials, or whether it will expand its lists to include CSAM content that was not initially included in the detection system. ↩︎
  8. It is possible that Apple’s attempt to detect CSAM content using device-based analysis is the lead-up to the company extending end-to-end encryption to more of the content that is stored in iCloud but, as of writing, this is only a hypothesis for why Apple has implemented its new surveillance features. ↩︎
  9. For a good discussion about Apple’s newly announced features, as well as details of how NCMEC operates, watch or listen to a conversation between Alex Stamos, Riana Pfefferkorn, David Thiel, and Matthew Green that occurred on August 5, 2021. ↩︎
  10. Per Erik Neuenschwander, Apple’s head of privacy, “The hash list is built into the operating system, we have one global operating system and don’t have the ability to target updates to individual users and so hash lists will be shared by all users when the system is enabled.” ↩︎
  11. As an example, it has historically been the case that if an Apple user first creates their Apple ID in China their status as a Chinese user tracks with them even if they move to a Western country or obtain permanent residency in a country outside China. To date, it is an under-explored research question as to where that user’s customer data is stored if they are using Apple services while outside China, such as when abroad as an international student. ↩︎
  12. I first encountered this line of analysis in a thread by Avi. ↩︎
  13. “241. The second aspect of the debate concerns the consequences, and in particular the consequences for the ISPs in terms of the costs of implementation, of that approach. So far as this aspect of the matter is concerned, the ISPs did not seriously dispute that the cost of implementing a single website blocking order was modest. As I have explained above, the ISPs already have the requisite technology at their disposal. Furthermore, much of the capital investment in that technology has been made for other reasons, in particular to enable the ISPs to implement the IWF blocking regime and/or parental controls. Still further, some of the ISPs’ running costs would also be incurred in any event for the same reasons. It can be seen from the figures I have set out in paragraphs 61-65 above that the marginal cost to each ISP of implementing a single further order is relatively small, even once one includes the ongoing cost of keeping it updated.” For more, see 2014 EWHC 3354 (Ch). ↩︎
  14. I recognize that re-engineering would need to take place for pure on-device surveillance but there is not a self-evident reason why Apple could not do so, outside of policy reasons that might change over the course of the company’s operations. ↩︎
  15. Of note, this is a classic argument made by groups seeking to undermine privacy generally: “If you have nothing to hide, then you have nothing to fear.” ↩︎
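
The dispute over Apple’s ‘1 in a trillion’ figure (footnote 6) ultimately turns on how per-image false-positive rates compound across a photo library before an account is flagged. As a rough illustration only, the sketch below uses a simple binomial model; the per-image false-positive rate, library size, and match threshold are all hypothetical assumptions of mine, not Apple’s published parameters, and the model ignores correlations between images that could make real-world rates higher.

```python
from math import comb

def prob_flagged(n_photos: int, per_image_fpr: float, threshold: int) -> float:
    """Probability that an entirely innocent library of n_photos produces at
    least `threshold` false hash matches, assuming each image is an
    independent Bernoulli trial with false-positive rate per_image_fpr."""
    # P(fewer than `threshold` matches), summed term by term.
    p_fewer = sum(
        comb(n_photos, k) * per_image_fpr**k * (1 - per_image_fpr) ** (n_photos - k)
        for k in range(threshold)
    )
    return 1 - p_fewer

# Hypothetical numbers for illustration: a 20,000-photo library, a
# one-in-a-million per-image false-positive rate, and a 30-match threshold.
print(prob_flagged(20_000, 1e-6, 30))
```

The point of the sketch is not the specific output but the structure of the argument: the aggregate risk depends sharply on the per-image rate and the threshold, neither of which is independently verifiable by outsiders, which is why critics such as Lewis contend the headline figure cannot simply be taken at face value.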