On August 5, 2021, Apple announced that it would soon begin conducting pervasive surveillance of the devices it sells, with the stated intent of expanding protections for children. The company announced three new features. The first will monitor for children sending or receiving sexually explicit images over the Messages application, the second will monitor for the reception or collection of Child Sexual Abuse Material (CSAM), and the third will monitor for searches pertaining to CSAM. These features are slated to be activated in the next versions of Apple’s mobile and desktop operating systems, which will ship to end-users in the fall of 2021.
In this post I focus exclusively on the surveillance of children’s messages to detect whether they are receiving or sending sexually explicit images. I begin with a short discussion of how Apple has described this system and spell out the rationales for it, and then proceed to outline some early concerns with how this feature might negatively affect children and adults alike. Future posts will address the second and third child safety features that Apple has announced, as well as broader problems associated with Apple’s unilateral decision to expand surveillance on its devices.
Sexually Explicit Image Surveillance in Messages
Apple currently lets families share access to Apple services and cloud storage using Family Sharing. The organizer of a Family Sharing plan can use a number of parental controls to restrict the activities that children included in the plan can perform. For Apple, children are individuals who are under 18 years of age.
Upon the installation of Apple’s forthcoming mobile and desktop operating systems, and where this analysis feature has been enabled in Family Sharing, children’s communications over Apple’s Messages application can be analyzed to assess whether the content of the communications includes sexually explicit images. Apple’s analysis of images will occur on-device, and Apple will not be notified of whether an image is sexually explicit. Should an image be detected, it will initially be blurred out; if a child wants to see the image, they must proceed through either one or two prompts, depending on their age and how their parents have configured the parental management settings.
Last week I appeared before the Special Committee on Canada-China Relations to testify about the security challenges posed by Chinese infrastructure vendors and communications intermediaries. The oral comments I provided to the committee were, substantially, a truncated version of the brief I submitted. For those interested, my oral comments are available to download; what follows in this post is the brief itself as submitted.
I am a senior research associate at the Citizen Lab, Munk School of Global Affairs & Public Policy at the University of Toronto. My research explores the intersection of law, policy, and technology, and focuses on issues of national security, data security, and data privacy. I submit these comments in a professional capacity representing my views and those of the Citizen Lab.
Successive international efforts to globalize trade and supply chains have led to many products being designed, developed, manufactured, or shipped through China. This has, in part, meant that Chinese companies are regularly involved in the creation and distribution of products that are used in the daily lives of billions of people around the world, including products that are integrated into Canadians’ personal lives and the critical infrastructures on which they depend. The Chinese government’s increasing assertiveness on the international stage and its belligerent behaviours, in tandem with opaque national security laws, have led to questioning in many Western countries of the extent to which products which come from China can be trusted. In particular, two questions are regularly raised: might supply chains be used as diplomatic or trade leverage or, alternately, will products produced in, transited through, or operated from China be used to facilitate government intelligence, attack, or influence operations?
For decades there have been persistent concerns about managing technology products’ supply chains. In recent years, these concerns have focused on telecommunications equipment, such as that produced by ZTE and Huawei, as well as on the ways that social media platforms such as WeChat or TikTok could be surreptitiously used to advance the Chinese government’s interests. As a result of these concerns, some of Canada’s allies have formally or informally blocked Chinese telecommunications vendors’ equipment from critical infrastructure. In the United States, military personnel are restricted in which mobile devices they can buy on base and are advised not to use applications like TikTok, and the Trump administration aggressively sought to modify the terms under which Chinese social media platforms were available in the United States marketplace.
Legislators and some security professionals have worried that ZTE or Huawei products might be deliberately modified to facilitate Chinese intelligence or attack operations, or be drawn into bilateral negotiations or conflicts that could arise with the Chinese government. Further, social media platforms might be used to facilitate surveillance of international users of the applications, or the platforms’ algorithms could be configured to censor content or to conduct imperceptible influence operations.
Just as there are generalized concerns about supply chains there are also profound worries about the state of computer (in)security. Serious computer vulnerabilities are exposed and exploited on a daily basis. State operators take advantage of vulnerabilities in hardware and software alike to facilitate computer network discovery, exploitation, and attack operations, with operations often divided between formal national security organs, branches of national militaries, and informal state-adjacent (and often criminal) operators. Criminal organizations, similarly, discover and take advantage of vulnerabilities in digital systems to conduct identity theft, steal intellectual property for clients or to sell on black markets, use and monetize vulnerabilities in ransomware campaigns, and otherwise engage in socially deleterious activities.
In aggregate, issues of supply chain management and computer insecurity raise baseline questions of trust: how can we trust that equipment or platforms have not been deliberately modified or exploited to the detriment of Canadian interests? And given the state of computer insecurity, how can we rely on technologies with distributed and international development and production teams? In the rest of this submission, I expand on specific trust-related concerns and identify ways to engender trust or, at the very least, make it easier to identify when we should in fact be less trusting of equipment or services which are available to Canadians and Canadian organizations.
This article is an exploratory study of the influence of beat and employment status on the information security culture of journalism (security-related values, mental models, and practices that are shared across the profession). The study is based on semi-structured interviews with 16 journalists based in Canada in staff or freelance positions working on investigative or non-investigative beats. We find that journalism has a multitude of security cultures that are influenced by beat and employment status. The perceived need for information security is tied to perceptions of sensitivity for a particular story or source. Beat affects how journalists perceive and experience information security threats. Investigative journalists are concerned with surveillance and legal threats from state actors including law enforcement and intelligence agencies. Non-investigative journalists are more concerned with surveillance, harassment, and legal actions from companies or individuals. Employment status influences the perceived ability of journalists to effectively implement information security. Based on these results we discuss how journalists and news organisations can develop effective security cultures and raise information security standards.
The Government of Canada has historically opposed the calls of its western allies to undermine the encryption protocols and associated applications that secure Canadians’ communications and devices from criminal and illicit activities. In particular, over the past two years the Minister of Public Safety, Ralph Goodale, has communicated to Canada’s Five Eyes allies that Canada will neither adopt nor advance an irresponsible encryption policy that would compel private companies to deliberately inject weaknesses into cryptographic algorithms or the applications that facilitate encrypted communications. This year, however, the tide may have turned, with the Minister apparently deciding to adopt the very irresponsible encryption policy position he had previously steadfastly opposed. To be clear, should the Government of Canada, along with its allies, compel private companies to deliberately sabotage strong and robust encryption protocols and systems, then basic rights and freedoms, cybersecurity, economic development, and foreign policy goals will all be jeopardized.
This article begins by briefly outlining the history and recent developments in the Canadian government’s thinking about strong encryption. Next, the article showcases how government agencies have failed to produce reliable information which supports the Minister’s position that encryption is significantly contributing to public safety risks. After outlining the government’s deficient rationales for calling for the weakening of strong encryption, the article shifts to discuss the rights which are enabled and secured as private companies integrate strong encryption into their devices and services, as well as why deliberately weakening encryption will lead to a series of deeply problematic policy outcomes. The article concludes by summarizing why it is important that the Canadian government walk back from its newly adopted irresponsible encryption policy.
The Canadian SIGINT Summaries include downloadable copies of leaked CSE documents, along with summary, publication, and original source information.
Parsons, Christopher; and Molnar, Adam. (2021). “Horizontal Accountability and Signals Intelligence: Lesson Drawing from Annual Electronic Surveillance Reports,” David Murakami Wood and David Lyon (Eds.), Big Data Surveillance and Security Intelligence: The Canadian Case.
Parsons, Christopher. (2015). “Stuck on the Agenda: Drawing lessons from the stagnation of ‘lawful access’ legislation in Canada,” Michael Geist (Ed.), Law, Privacy and Surveillance in Canada in the Post-Snowden Era (University of Ottawa Press).
Parsons, Christopher. (2015). “The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians,” Telecom Transparency Project.
Parsons, Christopher. (2015). “Beyond the ATIP: New methods for interrogating state surveillance,” in Jamie Brownlee and Kevin Walby (Eds.), Access to Information and Social Justice (Arbeiter Ring Publishing).
Bennett, Colin; Parsons, Christopher; and Molnar, Adam. (2014). “Forgetting and the right to be forgotten,” in Serge Gutwirth et al. (Eds.), Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges.
Bennett, Colin; and Parsons, Christopher. (2013). “Privacy and Surveillance: The Multi-Disciplinary Literature on the Capture, Use, and Disclosure of Personal Information in Cyberspace,” in W. Dutton (Ed.), Oxford Handbook of Internet Studies.
McPhail, Brenda; Parsons, Christopher; Ferenbok, Joseph; Smith, Karen; and Clement, Andrew. (2013). “Identifying Canadians at the Border: ePassports and the 9/11 legacy,” in Canadian Journal of Law and Society 27(3).
Parsons, Christopher; Savirimuthu, Joseph; Wipond, Rob; McArthur, Kevin. (2012). “ANPR: Code and Rhetorics of Compliance,” in European Journal of Law and Technology 3(3).