On August 5, 2021, Apple announced that it would soon begin conducting pervasive surveillance of the devices that it sells, with the stated intent of expanding protections for children. The company announced three new features: the first will monitor for children sending or receiving sexually explicit images over the Messages application, the second will monitor for the reception or collection of Child Sexual Abuse Material (CSAM), and the third will monitor for searches pertaining to CSAM. These features are planned to be activated in the next versions of Apple’s mobile and desktop operating systems, which will ship to end-users in the fall of 2021.
In this post I focus exclusively on the surveillance of children’s messages to detect whether they are receiving or sending sexually explicit images. I begin with a short discussion of how Apple has described this system and spell out the rationales for it, and then proceed to outline some early concerns with how this feature might negatively affect children and adults alike. Future posts will address the second and third child safety features that Apple has announced, as well as broader problems associated with Apple’s unilateral decision to expand surveillance on its devices.
Sexually Explicit Image Surveillance in Messages
Apple currently lets families share access to Apple services and cloud storage using Family Sharing. The organizer of the Family Sharing plan can utilize a number of parental controls to restrict the activities that children who are included in a Family Sharing plan can perform. Children, for Apple, include individuals who are under 18 years of age.
Upon the installation of Apple’s forthcoming mobile and desktop operating systems, and where this feature has been enabled in Family Sharing, children’s communications over Apple’s Messages application can be analyzed to assess whether they include sexually explicit images. Apple’s analysis of images will occur on-device, and Apple will not be notified of whether an image is sexually explicit. Should an image be detected, it will initially be blurred out, and if a child wants to see the image they must proceed through either one or two prompts, depending on their age and how their parents have configured the parental management settings.
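The described flow can be sketched as a small decision function. This is purely my own illustration of the behaviour as described, not Apple’s implementation: the function name, the notification setting, and the under-13 age threshold are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of the on-device decision flow described above.
# All names and thresholds are illustrative, not Apple's actual code.

def prompts_before_viewing(child_age: int, parental_notification_enabled: bool) -> int:
    """Return how many confirmation prompts a child must pass before
    viewing a blurred image flagged as sexually explicit."""
    # Every flagged image is blurred and requires at least one prompt
    # before it can be viewed.
    prompts = 1
    # For illustration, assume younger children whose parents enabled
    # notifications face a second prompt warning that their parents
    # will be informed if they proceed.
    if child_age < 13 and parental_notification_enabled:
        prompts += 1
    return prompts
```

The point of the sketch is that the number of prompts is a function of two inputs, age and parental configuration, which is why different children in the same Family Sharing plan can see different behaviour for the same image.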
NSIRA is responsible for conducting national security reviews of Canadian federal agencies, inclusive of “the Canadian Security Intelligence Service (CSIS) and the Communications Security Establishment (CSE), as well as the national security and intelligence activities of all other federal departments and agencies.” The expanded list of departments and agencies includes the Royal Canadian Mounted Police (RCMP), the Canada Border Services Agency (CBSA), the Department of National Defence (DND), Global Affairs Canada (GAC), and the Department of Justice (DoJ). As a result of their expansive mandate, the Agency has access to broad swathes of information about the activities which are undertaken by Canada’s national security and intelligence community.
Despite the potential significance of this breach, little has been publicly written about the possible implications of the unauthorized access. This post acts as an early round of analysis of the potential significance of the access by, first, outlining the kinds of information which may have been accessed by the unauthorized party and, then, raising a series of questions that remain unanswered in NSIRA’s statement. The answers to these questions may dictate the actual seriousness and severity of the cyber-incident.
What is Protected Information?
NSIRA’s unclassified information includes Protected information. Information is classified as Protected when, if compromised, it “could reasonably be expected to cause injury to a non-national interest—that is, an individual interest such as a person or an organization.” There are three classes of protected information, applied based on the sensitivity of the information. Compromising Protected A information could “cause injury to an individual, organization or government,” whereas compromising Protected B information could “cause serious injury,” and compromising Protected C information could “cause extremely grave injury.” Protected C information is safeguarded in the same manner as Confidential or Secret material which, if compromised, could respectively cause injury or serious injury to “the national interest, defence and maintenance of the social, political, and economic wellbeing of Canada.”
Intrusion into protected networks brings with it potentially significant concerns based on the information which may be obtained. Per Veterans Affairs, employee information associated with Protected A information could include ‘tombstone’ information such as name, home address, telephone numbers or date of birth, personal record identifiers, language test results, or views which if made public would cause embarrassment to the individual or organization. Protected B could include medical records (e.g., physical, psychiatric, or psychological descriptions), performance reviews, tax returns, an individual’s financial information, character assessments, or other files or information that are composed of a significant amount of personal information.
More broadly, Protected A information can include third-party business information that has been provided in confidence, contracts, or tenders. Protected B information in excess of staff information might include that which, if disclosed, could cause a loss of competitive advantage to a Canadian company or could impede the development of government policies such as by revealing Treasury Board submissions.
In short, information classified as Protected could be manipulated for a number of ends depending on the specifics of what information is in a computer network. Theoretically, and assuming that an expansive amount of protected information were present, the information might be used by third-parties to attempt to recruit or target government staff or could give insights into activities that NSIRA was interested in reviewing, or is actively reviewing. Further, were NSIRA either reviewing non-classified government policies or preparing such policies for the Treasury Board, the revelation of such information might advantage unauthorized parties by enabling them to predict or respond to those policies in advance of their being put in place.
Over the past several years I’ve undertaken research exploring how, how often, and for what reasons governments in Canada access telecommunications data. As one facet of this line of research I worked with Dr. Adam Molnar to understand the regularity with which policing agencies across Canada have sought, and obtained, warrants to lawfully engage in real-time electronic surveillance. Such data is particularly important given the regularity with which Canadian law enforcement agencies call for new powers: how effective are historical methods of capturing communications data? How useful are the statistics which are tabled by governments? We answer these questions in a paper published in the Canadian Journal of Law and Technology, entitled “Government Surveillance Accountability: The Failures of Contemporary Canadian Interception Reports.” The abstract follows, as do links to the Canadian interception reports upon which we based our findings.
Real time electronic government surveillance is recognized as amongst the most intrusive types of government activity upon private citizens’ lives. There are usually stringent warranting practices that must be met prior to law enforcement or security agencies engaging in such domestic surveillance. In Canada, federal and provincial governments must report annually on these practices when they are conducted by law enforcement or the Canadian Security Intelligence Service, disclosing how often such warrants are sought and granted, the types of crimes such surveillance is directed towards, and the efficacy of such surveillance in being used as evidence and securing convictions.
This article draws on an empirical examination of federal and provincial electronic surveillance reports in Canada to examine the usefulness of Canadian governments’ annual electronic surveillance reports for legislators and external stakeholders alike to hold the government to account. It explores whether there are primary gaps in accountability, such as where there are no legislative requirements to produce records to legislators or external stakeholders. It also examines the extent to which secondary gaps exist, such as where there is a failure of legislative compliance or ambiguity related to that compliance.
We find that extensive secondary gaps undermine legislators’ abilities to hold government to account and weaken external stakeholders’ capacities to understand and demand justification for government surveillance activities. In particular, these gaps arise from failures to annually table reports, from divergent formatting of reports between jurisdictions, and from deficient narrative explanations accompanying the tabled electronic surveillance reports. The chronic nature of these gaps leads us to argue that there are policy failures emergent from the discretion granted to government Ministers and from failures to deliberately establish conditions that would ensure governmental accountability. Unless these deficiencies are corrected, accountability reporting as a public policy instrument threatens to advance a veneer of political legitimacy at the expense of maintaining robust democratic safeguards to secure the freedoms associated with liberal democratic political systems. We ultimately propose a series of policy proposals which, if adopted, should ensure that government accountability reporting is both substantial and effective as a policy instrument to monitor and review the efficacy of real-time electronic surveillance in Canada.
The Citizen Lab and the Canadian Internet Policy and Public Interest Clinic (CIPPIC) have released a joint report, “Shining a Light on the Encryption Debate: A Canadian Field Guide,” which was written by Lex Gill, Tamir Israel, and myself. We argue that access to strong encryption is integral to the defense of human rights in the digital era. Encryption technologies are also essential to securing digital transactions, ensuring public safety, and protecting national security interests. Unfortunately, many state agencies continue to argue that encryption poses insurmountable or unacceptable barriers to their investigative and intelligence-gathering activities. In response, some governments have advanced irresponsible encryption policies that would limit the public availability and use of secure, uncompromised encryption technologies.
Our report examines this encryption debate, paying particular attention to the Canadian context. It provides insight and analyses for policy makers, lawyers, academics, journalists, and advocates who are trying to understand encryption technologies and the potential viability and consequences of different policies pertaining to encryption.
Section One provides a brief primer on key technical principles and concepts associated with encryption in the service of improving policy outcomes and enhancing technical literacy. In particular, we review the distinction between encryption at rest and in transit, the difference between symmetric and asymmetric encryption systems, the issue of end-to-end encryption, and the concept of forward secrecy. We also identify some of the limits of encryption in restricting the investigative or intelligence-gathering objectives of the state, including in particular the relationship between encryption and metadata.
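To make the symmetric/asymmetric distinction from Section One concrete, here is a deliberately insecure toy sketch of my own construction (an XOR cipher and textbook RSA with tiny primes); real systems rely on vetted cryptographic libraries and large keys, neither of which appears here.

```python
# Toy illustration (NOT secure) of the symmetric/asymmetric distinction.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric encryption: the same key both encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Applying the cipher twice with the same key recovers the plaintext.
ciphertext = xor_cipher(b"hello", b"key")
plaintext = xor_cipher(ciphertext, b"key")

# Asymmetric encryption: textbook RSA with tiny primes. The public key
# (e, n) encrypts; only the holder of the private key (d, n) can decrypt.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (modular inverse)

message = 42
encrypted = pow(message, e, n)      # anyone with (e, n) can do this
recovered = pow(encrypted, d, n)    # only the private-key holder can undo it
```

The asymmetric half is what underpins end-to-end encryption: each party can publish an encryption key while keeping the decryption key to themselves, so no intermediary who relays the ciphertext can read it.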
Section Two explains how access to strong, uncompromised encryption technology serves critical public interest objectives. Encryption is intimately connected to the constitutional protections guaranteed by the Canadian Charter of Rights and Freedoms as well as those rights enshrined in international human rights law. In particular, encryption enables the right to privacy, the right to freedom of expression, and related rights to freedom of opinion and belief. In an era where signals intelligence agencies operate with minimal restrictions on their foreign facing activities, encryption remains one of the few practical limits on mass surveillance. Encryption also helps to guarantee privacy in our personal lives, shielding individuals from abusive partners, exploitative employers, and online harassment. The mere awareness of mass surveillance exerts a significant chilling effect on freedom of expression. Vulnerable and marginalized groups are both disproportionately subject to state scrutiny and may be particularly vulnerable to these chilling effects. Democracies pay a particularly high price when minority voices and dissenting views are pressured to self-censor or refrain from participating in public life. The same is true when human rights activists, journalists, lawyers, and others whose work demands the ability to call attention to injustice, often at some personal risk, are deterred from leveraging digital networks in pursuit of their activities. Unrestricted public access to reliable encryption technology can help to shield individuals from these threats. Efforts to undermine the security of encryption in order to facilitate state access, by contrast, are likely to magnify these risks. Uncompromised encryption systems can thus foster the security necessary for meaningful inclusion, democratic engagement, and equal access in the digital sphere.
Section Three explores the history of encryption policy across four somewhat distinct eras, with a focus on Canada to the extent that the Canadian government played an active role in addressing encryption. The first era is characterized by the efforts of intelligence agencies such as the United States National Security Agency (NSA) to limit the public availability of secure encryption technology. In the second era, during the 1990s, encryption emerged as a vital tool for securing electronic trust on the emerging web. In the third era, between 2000 and 2010, the development and proliferation of strong encryption technology in Canada, the United States, and Europe progressed relatively unimpeded. The fourth era runs from 2011 to the present day, during which calls to compromise, weaken, and restrict access to encryption technology have steadily reemerged.
Section Four reviews the broad spectrum of legal and policy responses to government agencies’ perceived encryption “problem,” including historical examples, international case studies, and present-day proposals. The section provides an overview of factors which may help to evaluate these measures in context. In particular, it emphasizes questions related to: (1) whether the proposed measure is truly targeted and avoids collateral or systemic impacts on uninvolved parties; (2) whether there is an element of conscription or compelled participation which raises an issue of self-incrimination or unfairly impacts the interests of a third party; and (3) whether, in considering all the factors, the response remains both truly necessary and truly proportionate. The analysis of policy measures in this section proceeds in three categories. The first category includes measures designed to limit the broad public availability of effective encryption tools. The second category reviews measures that are directed at intermediaries and service providers. The third category focuses on efforts that target specific encrypted devices, accounts, or individuals.
Section Five examines the necessity of proposed responses to the encryption “problem.” A holistic and contextual analysis of the encryption debate makes clear that the investigative and intelligence costs imposed by unrestricted public access to strong encryption technology are often overstated. At the same time, the risks associated with government proposals to compromise encryption in order to ensure greater ease of access for state agencies are often grossly understated. When weighed against the profound costs to human rights, the economy, consumer trust, public safety, and national security, such measures will rarely—if ever—be proportionate and almost always constitute an irresponsible approach to encryption policy. In light of this, rather than finding ways to undermine encryption, the Government of Canada should make efforts to encourage the development and adoption of strong and uncompromised technology.
This research was led by the Citizen Lab at the Munk School of Global Affairs, University of Toronto, as well as the Canadian Internet Policy and Public Interest Clinic (CIPPIC) at the University of Ottawa. This project was funded, in part, by the John D. and Catherine T. MacArthur Foundation and the Ford Foundation.
The authors would like to extend their deepest gratitude to a number of individuals who have provided support and feedback in the production of this report, including (in alphabetical order) Bram Abramson, Nate Cardozo, Masashi Crete-Nishihata, Ron Deibert, Mickael E.B., Andrew Hilts, Jeffrey Knockel, Adam Molnar, Christopher Prince, Tina Salameh, Amie Stepanovich, and Mari Jing Zhou. Any errors remain the fault of the authors alone.
We are also grateful to the many individuals and organizations who gave us the opportunity to share early versions of this work, including Lisa Austin at the Faculty of Law (University of Toronto); Vanessa Rhinesmith and David Eaves at digital HKS (Harvard Kennedy School); Ian Goldberg and Erinn Atwater at the Cryptography, Security, and Privacy (CrySP) Research Group (University of Waterloo); Florian Martin-Bariteau at the Centre for Law, Technology and Society (University of Ottawa); and the Citizen Lab Summer Institute (Munk School of Global Affairs, University of Toronto).
Lex Gill is a Citizen Lab Research Fellow. She has also served as the National Security Program Advocate to the Canadian Civil Liberties Association, as a CIPPIC Google Policy Fellow and as a researcher to the Berkman Klein Center for Internet & Society at Harvard University. She holds a B.C.L./LL.B. from McGill University’s Faculty of Law.
Tamir Israel is Staff Lawyer at the Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic at the University of Ottawa, Faculty of Law. He leads CIPPIC’s privacy, net neutrality, electronic surveillance and telecommunications regulation activities and conducts research and advocacy on a range of other digital rights-related topics.
Christopher Parsons is currently a Research Associate at the Citizen Lab, at the Munk School of Global Affairs, University of Toronto, as well as the Managing Director of the Telecom Transparency Project at the Citizen Lab. He received his Bachelor’s and Master’s degrees from the University of Guelph, and his Ph.D. from the University of Victoria.
The Canadian SIGINT Summaries includes downloadable copies, along with summary, publication, and original source information, of leaked CSE documents.
Parsons, Christopher; and Molnar, Adam. (2021). “Horizontal Accountability and Signals Intelligence: Lesson Drawing from Annual Electronic Surveillance Reports,” David Murakami Wood and David Lyon (Eds.), Big Data Surveillance and Security Intelligence: The Canadian Case.
Parsons, Christopher. (2015). “Stuck on the Agenda: Drawing lessons from the stagnation of ‘lawful access’ legislation in Canada,” Michael Geist (ed.), Law, Privacy and Surveillance in Canada in the Post-Snowden Era (Ottawa University Press).
Parsons, Christopher. (2015). “The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians,” Telecom Transparency Project.
Parsons, Christopher. (2015). “Beyond the ATIP: New methods for interrogating state surveillance,” in Jamie Brownlee and Kevin Walby (Eds.), Access to Information and Social Justice (Arbeiter Ring Publishing).
Bennett, Colin; Parsons, Christopher; Molnar, Adam. (2014). “Forgetting and the right to be forgotten” in Serge Gutwirth et al. (Eds.), Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges.
Bennett, Colin, and Parsons, Christopher. (2013). “Privacy and Surveillance: The Multi-Disciplinary Literature on the Capture, Use, and Disclosure of Personal information in Cyberspace” in W. Dutton (Ed.), Oxford Handbook of Internet Studies.
McPhail, Brenda; Parsons, Christopher; Ferenbok, Joseph; Smith, Karen; and Clement, Andrew. (2013). “Identifying Canadians at the Border: ePassports and the 9/11 legacy,” in Canadian Journal of Law and Society 27(3).
Parsons, Christopher; Savirimuthu, Joseph; Wipond, Rob; McArthur, Kevin. (2012). “ANPR: Code and Rhetorics of Compliance,” in European Journal of Law and Technology 3(3).