Last week I appeared before the House of Commons’ Standing Committee on Access to Information, Privacy, and Ethics to testify about the public and private policy implications of PHAC’s use of mobility information since March 2020. I provided oral comments to the committee which were, substantially, a truncated version of the brief I submitted. If interested, my oral comments are available to download. What follows in this post is the content of the brief which was submitted.
I am a senior research associate at the Citizen Lab, Munk School of Global Affairs & Public Policy at the University of Toronto. My research explores the intersection of law, policy, and technology, and focuses on issues of national security, data security, and data privacy. While I submit these comments in a professional capacity they do not necessarily represent the full views of the Citizen Lab.
For the better part of twenty years, law enforcement agencies in Canada have sought warrantless access to subscriber data that is held by telecommunications service providers and other organizations. The rationale has been that some baseline digital identifiers are needed to open investigations into alleged harms or criminal activities that have a digital nexus. Only once these identifiers are in hand can an investigation bloom. However, due to the time that it takes to obtain a relevant court order, as well as challenges in satisfying a judge or justice that there is a legitimate need to obtain these identifiers in the first place, these same agencies recurrently assert that an initial set of seed digital identifiers should be disclosed to officers absent a court order.
The Government of Canada has, once more, raised the prospect of law enforcement officers obtaining subscriber or transmission data without a warrant when undertaking activities intended to “enhance efforts to curb child pornography.” This time, the argument that such information should be made available arises in the context of combatting online harms. The government has heard that companies should include basic subscriber or transmission data in their child pornography-related reports to law enforcement, thereby allowing law enforcement agencies to avoid obtaining a warrant before receiving this information.
In this post I start by discussing the context in which this proposal to obtain information without warrant has been raised, as well as why subscriber and transmission data can be deeply revelatory. With that out of the way, I outline a series of challenges that government agencies regularly experience but which tend not to be publicly acknowledged in the warrantless access debates associated with child sexual abuse material (CSAM). It is only with this broader context and awareness of the challenges facing government agencies in mind that it becomes apparent that warrantless access to subscriber or transmission data cannot ‘solve’ the issues faced by agencies which are responsible for investigating CSAM offences. To develop appropriate policy solutions, then, we must begin by acknowledging all of the current obstacles to investigating these offences. Only then can we hope to develop proportionate policy solutions.
Just before Christmas, Swikar Oli published an article in the National Post that discussed how the Public Health Agency of Canada (PHAC) obtained aggregated and anonymized mobility data for 33 million Canadians. From the story, we learn that the contract was awarded in March to TELUS, and that PHAC used the mobility data to “understand possible links between movement of populations within Canada and spread of COVID-19.”
Around the same time as the article was published, PHAC posted a notice of tender to continue collecting aggregated and anonymized mobility data that is associated with Canadian cellular devices. The contract would remain in place for several years and be used to continue providing mobility-related intelligence to PHAC.
Separate from either of these means of collecting data, PHAC has also been purchasing mobility data “… from companies who specialize in producing anonymized and aggregated mobility data based on location-based services that are embedded into various third-party apps on personal devices.” There has, also, been little discussion of PHAC’s collection and use of data from these kinds of third parties, which tend to be advertising and data surveillance companies that consumers have no idea are collecting, repackaging, and monetizing their personal information.
There are, at first glance, at least four major issues that arise out of how PHAC has obtained and can use the aggregated and anonymized information to which they have had, and plan to have, access.
On August 5, 2021, Apple announced that it would soon begin conducting pervasive surveillance of the devices that it sells, with the stated intent of expanding protections for children. The company announced three new features. The first will monitor for children sending or receiving sexually explicit images using the Messages application. The second will monitor for the presence of Child Sexual Abuse Material (CSAM) in iCloud Photos. The third will monitor for searches pertaining to CSAM. These features are planned to be activated in the United States in the next versions of Apple’s operating systems, which will ship to end-users in the fall of 2021.
In this post I focus exclusively on the surveillance of iCloud Photos for CSAM content. I begin with a background of Apple’s efforts to monitor for CSAM content on their services before providing a description of the newly announced CSAM surveillance system. I then turn to outline some problems, complications, and concerns with this new child safety feature. In particular, I discuss the challenges facing Apple in finding reputable child safety organizations with whom to partner, the potential ability to region-shift to avoid the surveillance, the prospect of the surveillance system leading to ongoing harms towards CSAM survivors, the likelihood that Apple will expand the content which is subject to the company’s surveillance infrastructure, and the weaponization of the CSAM surveillance infrastructure against journalists, human rights defenders, lawyers, opposition politicians, and political dissidents. I conclude with a broader discussion of the problems associated with Apple’s new CSAM surveillance infrastructure.
On August 5, 2021, Apple announced that it would soon begin conducting pervasive surveillance of the devices it sells, with the stated intent of expanding protections for children. The company announced three new features. The first will monitor for children sending or receiving sexually explicit images over the Messages application, the second will monitor for the reception or collection of Child Sexual Abuse Material (CSAM), and the third will monitor for searches pertaining to CSAM. These features are planned to be activated in the next versions of Apple’s mobile and desktop operating systems, which will ship to end-users in the fall of 2021.
In this post I focus exclusively on the surveillance of children’s messages to detect whether they are receiving or sending sexually explicit images. I begin with a short discussion of how Apple has described this system and spell out the rationales for it, and then proceed to outline some early concerns with how this feature might negatively affect children and adults alike. Future posts will address the second and third child safety features that Apple has announced, as well as broader problems associated with Apple’s unilateral decision to expand surveillance on its devices.
Sexually Explicit Image Surveillance in Messages
Apple currently lets families share access to Apple services and cloud storage using Family Sharing. The organizer of the Family Sharing plan can use a number of parental controls to restrict the activities of children who are included in the plan. For Apple, children are individuals who are under 18 years of age.
Once Apple’s forthcoming mobile and desktop operating systems are installed, and if this analysis feature is enabled in Family Sharing, children’s communications over Apple’s Messages application can be analyzed to assess whether they include sexually explicit images. Apple’s analysis of images will occur on-device, and Apple will not be notified of whether an image is sexually explicit. Should an image be detected, it will initially be blurred out, and if a child wants to see the image they must proceed through either one or two prompts, depending on their age and how their parents have configured the parental management settings.
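The prompt flow described above can be sketched as a small decision function. This is purely illustrative: the function name, the under-13 age cutoff for parental notification, and the configuration flags are my assumptions about how the described behaviour might be modelled, not Apple’s actual API or implementation.

```python
# Hypothetical sketch of the Messages prompt flow described above.
# All names and the age-13 cutoff are illustrative assumptions,
# not Apple's real implementation.

def prompts_required(child_age: int, parental_notification_enabled: bool) -> int:
    """Return how many confirmation prompts a child must click through
    before a detected sexually explicit image is un-blurred."""
    if child_age < 13 and parental_notification_enabled:
        # Younger children face a second prompt warning that
        # proceeding will notify their parents (assumed behaviour).
        return 2
    # Older children, or families without notification enabled,
    # see only the initial warning prompt.
    return 1
```

For example, under these assumptions a ten-year-old in a family with parental notification enabled would face two prompts, while a fifteen-year-old would face only one.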
On May 12, 2021, President Joseph Biden promulgated an Executive Order (EO) to compel federal agencies to modify and enhance their cybersecurity practices. In this brief post I note a handful of elements of the EO that are noteworthy for the United States and that can, more broadly, be used to inform, assess, and evaluate non-American cybersecurity practices.
The core takeaway, for me, is that the United States government is drawing from its higher-level strategies to form a clear and distinct set of policies that are linked to measurable goals. The Biden EO is significant in its scope, though it remains unclear whether it will actually lead to government agencies better mitigating the threats facing their computer networks and systems.