Finding You: The Network Effect of Telecommunications Vulnerabilities for Location Disclosure

Last week, I published a report with Gary Miller and the Citizen Lab entitled, “Finding You: The Network Effect of Telecommunications Vulnerabilities for Location Disclosure.” I undertook this research while still employed by the Citizen Lab and was delighted to see it made available to the public. In it, we discuss how the configuration and vulnerabilities of contemporary telecommunications networks enable surveillance actors to surreptitiously monitor the location of mobile phone users.

The report provides a high-level overview of the geolocation threats inherent to contemporary 3G, 4G, and 5G networks and the protocols on which they depend, followed by evidence of how widespread these threats have become. Part 1 provides the historical context of unauthorized location disclosures in mobile networks and explains the importance of the target identifiers that surveillance actors rely on. Part 2 explains how mobile networks are made vulnerable by the signaling protocols used for international roaming, and how access to those networks is made available to surveillance actors so that they can carry out attacks. An overview of the mobile ecosystem lays the foundation for the technical details of domestic versus international network surveillance, and a discussion of active versus passive surveillance techniques, supported by evidence of attacks, shows how location information is ultimately presented to the actor. Part 3 provides details of a case study, drawn from a media report, that shows evidence of widespread state-sponsored surveillance, followed by threat intelligence data revealing the network sources attributed to attacks detected in 2023. These case studies underscore the significance and prevalence of these kinds of surveillance operations.
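To make the “active” surveillance vector mentioned above more concrete, here is a minimal, illustrative sketch of the kind of query an actor with signaling access can issue against a poorly filtered network. The message and field names are simplified stand-ins of my own (a real-world analogue is the MAP ProvideSubscriberInfo operation); this is a toy model written for this post, not code from the report.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: a toy model of an "active" location query over roaming
# signalling links. Message and field names are simplified stand-ins, not a
# real SS7/Diameter implementation.

@dataclass
class SubscriberRecord:
    imsi: str              # the target identifier a surveillance actor needs
    serving_network: str   # the network currently serving the subscriber
    cell_id: str           # the cell the handset is attached to

# The home network's subscriber database, as reachable by any peer with
# signalling access (for example, a leased global title on an interconnect).
HOME_NETWORK_DB = {
    "001010123456789": SubscriberRecord("001010123456789", "302-220", "0x1A2B"),
}

def provide_subscriber_info(imsi: str) -> Optional[SubscriberRecord]:
    """Stand-in for a location-request message. In a poorly filtered network,
    the query is answered without checking that the requester has any
    legitimate roaming relationship with the subscriber."""
    return HOME_NETWORK_DB.get(imsi)

# An actor who knows only the target's IMSI learns the serving network and
# cell identifier, which maps to a coarse physical location.
record = provide_subscriber_info("001010123456789")
if record is not None:
    print(f"Target seen on network {record.serving_network}, cell {record.cell_id}")
```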

Deficiencies in the oversight and accountability of network security are discussed in Part 4, including the incentives and enablers that industry organizations and government regulatory agencies provide to surveillance actors. Part 5 makes clear that the adoption of 5G technologies will not mitigate future surveillance risks unless policymakers quickly move to compel telecommunications providers to adopt the security features that are already available in 5G standards and equipment. If policymakers do not move swiftly, then surveillance actors may continue to prey upon mobile phone users by tracking their physical location. Such a future paints a bleak picture for user privacy and must be avoided.

Public and Privacy Policy Implications of PHAC’s Use of Mobility Information

Last week I appeared before the House of Commons’ Standing Committee on Access to Information, Privacy, and Ethics to testify about the public and privacy policy implications of PHAC’s use of mobility information since March 2020. My oral comments to the committee were, substantially, a truncated version of the brief I submitted; if interested, they are available to download. What follows in this post is the content of the brief as submitted.

Introduction

  1. I am a senior research associate at the Citizen Lab, Munk School of Global Affairs & Public Policy at the University of Toronto. My research explores the intersection of law, policy, and technology, and focuses on issues of national security, data security, and data privacy. While I submit these comments in a professional capacity, they do not necessarily represent the full views of the Citizen Lab.

Canadian Government’s Pandemic Data Collection Reveals Serious Privacy, Transparency, and Accountability Deficits

Photo by Keira Burton on Pexels.com

Just before Christmas, Swikar Oli published an article in the National Post that discussed how the Public Health Agency of Canada (PHAC) obtained aggregated and anonymized mobility data for 33 million Canadians. From the story, we learn that the contract was awarded in March to TELUS, and that PHAC used the mobility data to “understand possible links between movement of populations within Canada and spread of COVID-19.”

Around the same time as the article was published, PHAC posted a notice of tender to continue collecting aggregated and anonymized mobility data that is associated with Canadian cellular devices. The contract would remain in place for several years and be used to continue providing mobility-related intelligence to PHAC.

Separate from either of these means of collecting data, PHAC has also been purchasing mobility data “… from companies who specialize in producing anonymized and aggregated mobility data based on location-based services that are embedded into various third-party apps on personal devices.” There has also been little discussion of PHAC’s collection and use of data from these kinds of third parties, which tend to be advertising and data surveillance companies that consumers have no idea are collecting, repackaging, and monetizing their personal information.
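To give a sense of what “aggregated and anonymized” mobility data can look like in practice, the following is a minimal sketch of one common approach: dropping device identifiers, counting devices per coarse region and time window, and suppressing small counts. The threshold and record layout are assumptions for illustration only, not a description of TELUS’s or PHAC’s actual methodology.

```python
from collections import Counter

# Illustrative only: one common way to produce "aggregated and anonymized"
# mobility data. The threshold and record layout are assumptions for
# illustration, not TELUS's or PHAC's actual methodology.

SUPPRESSION_THRESHOLD = 15  # hypothetical minimum count before a cell is released

# (device_id, region, hour) location records, as a carrier might hold them.
raw_pings = [
    ("dev-001", "Toronto/M5V", "2021-03-01T08"),
    ("dev-002", "Toronto/M5V", "2021-03-01T08"),
    ("dev-003", "Ottawa/K1A", "2021-03-01T08"),
]

def aggregate(pings):
    """Strip device identifiers, count devices per (region, hour), and drop
    counts small enough to risk re-identification."""
    counts = Counter((region, hour) for _device_id, region, hour in pings)
    return {cell: n for cell, n in counts.items() if n >= SUPPRESSION_THRESHOLD}

# With this tiny sample every cell falls below the threshold, so nothing is
# released; at carrier scale, only well-populated (region, hour) counts remain.
print(aggregate(raw_pings))  # -> {}
```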

There are, at first glance, at least four major issues that arise out of how PHAC has obtained and can use the aggregated and anonymized information to which they have had, and plan to have, access.


The Problems and Complications of Apple Monitoring for Child Sexual Abuse Material in iCloud Photos

Photo by Mateusz Dach on Pexels.com

On August 5, 2021, Apple announced that it would soon begin conducting pervasive surveillance of the devices that it sells, with the stated intent of expanding protections for children. The company announced three new features. The first will monitor for children sending or receiving sexually explicit images using the Messages application. The second will monitor for the presence of Child Sexual Abuse Material (CSAM) in iCloud Photos. The third will monitor for searches pertaining to CSAM. These features are planned to be activated in the United States in the next versions of Apple’s operating systems, which will ship to end-users in the fall of 2021.

In this post I focus exclusively on the surveillance of iCloud Photos for CSAM content. I begin with background on Apple’s efforts to monitor for CSAM content on its services before describing the newly announced CSAM surveillance system. I then outline some problems, complications, and concerns with this new child safety feature. In particular, I discuss the challenges Apple faces in finding reputable child safety organizations with which to partner, the potential ability to region-shift to avoid the surveillance, the prospect of the surveillance system leading to ongoing harms towards CSAM survivors, the likelihood that Apple will expand the content that is subject to the company’s surveillance infrastructure, and the weaponization of the CSAM surveillance infrastructure against journalists, human rights defenders, lawyers, opposition politicians, and political dissidents. I conclude with a broader discussion of the problems associated with Apple’s new CSAM surveillance infrastructure.
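As context for the discussion that follows, the sketch below illustrates the general pattern Apple described at a high level: the perceptual hash of each uploaded photo is compared against a database of known CSAM hashes, and an account is flagged for human review only once a threshold number of matches accumulates. The hash values, threshold, and function names are hypothetical, and the cryptographic protocols Apple described (private set intersection, threshold secret sharing) are omitted entirely.

```python
import hashlib

# Illustrative only: hash-matching with a reporting threshold, as a toy
# model. The hashes, threshold, and function names are hypothetical, not
# Apple's implementation.

KNOWN_CSAM_HASHES = {"hash-of-known-image-1", "hash-of-known-image-2"}  # hypothetical
MATCH_THRESHOLD = 30  # hypothetical value

def perceptual_hash(image_bytes: bytes) -> str:
    # Stand-in only: a real system uses a *perceptual* hash (visually similar
    # images yield matching digests); an ordinary cryptographic hash such as
    # SHA-256 does not have that property.
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(uploaded_photos):
    return sum(perceptual_hash(photo) in KNOWN_CSAM_HASHES for photo in uploaded_photos)

def account_flagged_for_review(uploaded_photos) -> bool:
    # Below the threshold, nothing is surfaced; at or above it, the matching
    # images would be decrypted and passed to human reviewers.
    return count_matches(uploaded_photos) >= MATCH_THRESHOLD
```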

A previous post focused on the surveillance of children’s messages to monitor for sexually explicit photos. Future posts will address the third child safety feature that Apple has announced, as well as the broader implications of Apple’s child safety initiatives.

Background to Apple Monitoring for CSAM

Apple has previously worked with law enforcement agencies to combat CSAM, though the full contours of that assistance are largely hidden from the public. In May 2019, Mac Observer noted that the company had modified its privacy policy to read, “[w]e may also use your personal information for account and network security purposes, including in order to protect our services for the benefit of all our users, and pre-screening or scanning uploaded content for potentially illegal content, including child sexual exploitation material” (emphasis not in original). Per Forbes, Apple places email messages under surveillance when they are routed through its systems. Mail is scanned, and if CSAM content is detected then Apple automatically prevents the email from reaching its recipient and assigns an employee to confirm the CSAM content of the message. If the employee confirms the existence of CSAM content, the company subsequently provides subscriber information to the National Center for Missing and Exploited Children (NCMEC) or a relevant government agency.
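The workflow described above — automated scanning, withheld delivery, human confirmation, then a report — can be pictured with the following toy sketch. The function names and control flow are placeholders of my own; Apple has not published its implementation.

```python
from enum import Enum, auto

# Illustrative only: a toy version of the email-scanning workflow described
# above. Function names and control flow are placeholders, not Apple's code.

class Outcome(Enum):
    DELIVERED = auto()
    BLOCKED = auto()
    BLOCKED_AND_REPORTED = auto()

def automated_scan_flags_csam(attachment: bytes) -> bool:
    """Placeholder for the automated matching step."""
    return False

def reviewer_confirms_csam(attachment: bytes) -> bool:
    """Placeholder for the human confirmation step."""
    return False

def report_subscriber(subscriber_id: str) -> None:
    """Placeholder: subscriber information is passed to NCMEC or a relevant
    government agency."""

def handle_email(attachment: bytes, subscriber_id: str) -> Outcome:
    if not automated_scan_flags_csam(attachment):
        return Outcome.DELIVERED
    # Delivery is prevented as soon as the automated scan flags the content,
    # and an employee is assigned to review it.
    if reviewer_confirms_csam(attachment):
        report_subscriber(subscriber_id)
        return Outcome.BLOCKED_AND_REPORTED
    return Outcome.BLOCKED
```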


Apple’s Monitoring of Children’s Communications Content Puts Children and Adults at Risk

Photo by Torsten Dettlaff on Pexels.com

On August 5, 2021, Apple announced that it would soon begin conducting pervasive surveillance of the devices that it sells, with the stated intent of expanding protections for children. The company announced three new features. The first will monitor for children sending or receiving sexually explicit images over the Messages application, the second will monitor for the reception or collection of Child Sexual Abuse Material (CSAM), and the third will monitor for searches pertaining to CSAM. These features are planned to be activated in the next versions of Apple’s mobile and desktop operating systems, which will ship to end-users in the fall of 2021.

In this post I focus exclusively on the surveillance of children’s messages to detect whether they are receiving or sending sexually explicit images. I begin with a short discussion of how Apple has described this system and spell out the rationales for it, and then proceed to outline some early concerns with how this feature might negatively affect children and adults alike. Future posts will address the second and third child safety features that Apple has announced, as well as broader problems associated with Apple’s unilateral decision to expand surveillance on its devices.

Sexually Explicit Image Surveillance in Messages

Apple currently lets families share access to Apple services and cloud storage using Family Sharing. The organizer of a Family Sharing plan can use a number of parental controls to restrict the activities of the children included in the plan. For Apple, children are individuals who are under 18 years of age.

Once Apple’s forthcoming mobile and desktop operating systems are installed, and if this analysis feature is enabled in Family Sharing, children’s communications over Apple’s Messages application can be analyzed to assess whether they include sexually explicit images. Apple’s analysis of images will occur on-device, and Apple will not be notified of whether an image is sexually explicit. Should an image be detected, it will initially be blurred out, and if a child wants to see the image they must proceed through either one or two prompts, depending on their age and how their parents have configured the parental management settings.
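A rough sketch of that prompt flow is below. The under-13 cutoff, the setting name, and the prompt wording are assumptions for illustration rather than Apple’s published values.

```python
from dataclasses import dataclass

# Illustrative only: the prompt flow described above, reduced to a toy
# function. The under-13 cutoff, setting name, and prompt wording are
# assumptions for illustration.

@dataclass
class ChildAccount:
    age: int
    parent_notification_enabled: bool  # set by the Family Sharing organizer

def prompts_before_viewing(child: ChildAccount) -> list:
    """Return the warnings a child must click through to view a flagged image."""
    prompts = ["This image may be sensitive. Do you want to view it anyway?"]
    if child.age < 13 and child.parent_notification_enabled:
        prompts.append("If you view this image, your parents will be notified.")
    return prompts

# A young child with parental notification enabled sees two prompts; an older
# teen sees only the first.
print(len(prompts_before_viewing(ChildAccount(age=10, parent_notification_enabled=True))))   # 2
print(len(prompts_before_viewing(ChildAccount(age=16, parent_notification_enabled=False))))  # 1
```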


We Chat, They Watch: How International Users Unwittingly Build up WeChat’s Chinese Censorship Apparatus

(Photo by Maxim Hopman on Unsplash)

Over the past several months I’ve had the distinct honour to work with, and learn from, a number of close colleagues and friends on the topic of the surveillance and censorship that take place on WeChat. We have published a report with the Citizen Lab entitled, “We Chat, They Watch: How International Users Unwittingly Build up WeChat’s Chinese Censorship Apparatus.” The report took a mixed-methods approach to understanding how non-China-registered WeChat accounts were subjected to surveillance that was then used to develop a censorship list applied to users who registered their accounts in China. Specifically, the report:

  • Presents results from technical experiments which reveal that WeChat communications conducted entirely among non-China-registered accounts are subject to pervasive content surveillance that was previously thought to be exclusively reserved for China-registered accounts.
  • Shows that documents and images transmitted entirely among non-China-registered accounts undergo content surveillance wherein these files are analyzed for content that is politically sensitive in China.
  • Shows that, upon analysis, files deemed politically sensitive are used to invisibly train and build up WeChat’s Chinese political censorship system (a simplified sketch of this pipeline follows the list below).
  • Notes that, from public information, it is unclear how Tencent uses non-China-registered users’ data to enable content blocking, or which policy rationale permits the sharing of data used for blocking between the international and China regions of WeChat.
  • Reports that Tencent’s responses to data access requests failed to clarify how data from international users is used to enable political censorship of the platform in China.
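One way to picture the pipeline the report describes — content shared among non-China-registered accounts is analyzed and, when deemed politically sensitive, its fingerprint is added to the blocklist later applied to China-registered accounts — is the toy sketch below. Using a file hash as the fingerprint, along with all function names, is an assumption for illustration, not a claim about Tencent’s actual implementation.

```python
import hashlib

# Illustrative only: the surveillance-to-censorship pipeline described in the
# report, reduced to a toy model. Using a file hash as the "fingerprint" is
# an assumption for illustration, not a claim about Tencent's implementation.

censorship_blocklist = set()  # fingerprints of content deemed politically sensitive

def fingerprint(file_bytes: bytes) -> str:
    return hashlib.md5(file_bytes).hexdigest()

def deemed_politically_sensitive(file_bytes: bytes) -> bool:
    """Placeholder for the content analysis applied to documents and images."""
    return False

def surveil_international_message(file_bytes: bytes) -> None:
    # Files sent entirely among non-China-registered accounts are analyzed,
    # and flagged files silently grow the blocklist.
    if deemed_politically_sensitive(file_bytes):
        censorship_blocklist.add(fingerprint(file_bytes))

def deliver_to_china_registered_account(file_bytes: bytes) -> bool:
    # The blocklist built from international users' content is applied to
    # China-registered accounts; matching files are not delivered.
    return fingerprint(file_bytes) not in censorship_blocklist
```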

You can download the report as a PDF, or read it in its entirety on the Citizen Lab’s website. There is also a corresponding FAQ that quickly answers questions you may have about the report.