Minding Your Business: A Critical Analysis of the Collection of De-identified Mobility Data and Its Use Under Socially Beneficial and Legitimate Business Exemptions in Canadian Privacy Law

Earlier this month, Amanda Cutinha and I published a report entitled “Minding Your Business: A Critical Analysis of the Collection of De-identified Mobility Data and Its Use Under Socially Beneficial and Legitimate Business Exemptions in Canadian Privacy Law.” In it, we examine how the Government of Canada obtained and used mobility data over the course of the COVID-19 pandemic, and use that recent history to analyse and critique the Consumer Privacy Protection Act (CPPA).

The report provides a detailed summary of how mobility information was collected, as well as a legal analysis of why the collection and use of this information likely conformed with both the Privacy Act and the Personal Information Protection and Electronic Documents Act (PIPEDA). We use this conformity to highlight a series of latent governance challenges in PIPEDA, namely:

  1. PIPEDA fails to adequately protect the privacy interests at stake with de-identified and aggregated data, despite the risks associated with re-identification (made concrete in the sketch following this list).
  2. PIPEDA lacks requirements that individuals be informed of how their data is de-identified or used for secondary purposes.
  3. PIPEDA does not enable individuals or communities to substantively prevent harmful impacts of data sharing with the government.
  4. PIPEDA lacks sufficient checks and balances to ensure that meaningful consent is obtained to collect, use, or disclose de-identified data.
  5. PIPEDA accounts for neither Indigenous data sovereignty nor the Indigenous sovereignty principles in the United Nations Declaration on the Rights of Indigenous Peoples, which Canada has adopted.
  6. PIPEDA generally lacks sufficient enforcement mechanisms.
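
The re-identification risk in the first point is easy to underestimate, so here is a minimal sketch of why it matters, using entirely synthetic data. It illustrates a well-documented finding (see, e.g., de Montjoye et al.’s “Unique in the Crowd,” 2013): even after identifiers are stripped, a handful of coarse location-and-time observations frequently matches exactly one trace in a mobility dataset. The trace model, tower count, and trial counts below are all invented for illustration.

```python
import random

# Entirely synthetic illustration: even with identifiers stripped, a few
# coarse (hour, cell-tower) observations often match exactly one trace.
random.seed(0)

NUM_USERS = 1_000
TOWERS = 50        # coarse spatial granularity (e.g., cell towers)
HOURS = 24 * 7     # one week of hourly observations

# Each "de-identified" trace records which tower a device used at each hour.
traces = [
    tuple(random.randrange(TOWERS) for _ in range(HOURS))
    for _ in range(NUM_USERS)
]

def fraction_uniquely_matched(points: int, trials: int = 200) -> float:
    """Fraction of sampled users pinned down by `points` random observations."""
    unique = 0
    for _ in range(trials):
        target = random.randrange(NUM_USERS)
        hours = random.sample(range(HOURS), points)
        observed = [(h, traces[target][h]) for h in hours]
        # Count how many traces in the whole dataset are consistent with
        # those few observations; a count of 1 means re-identification.
        matches = sum(
            all(trace[h] == tower for h, tower in observed) for trace in traces
        )
        unique += (matches == 1)
    return unique / trials

for k in (1, 2, 3, 4):
    print(f"{k} observation(s): {fraction_uniquely_matched(k):.0%} uniquely matched")
```

Run against this synthetic population, one observation almost never isolates a trace, but three or four almost always do, which is why “de-identified” mobility data still demands substantive legal protection.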

We then leverage these governance challenges to analyse the CPPA and to suggest amendments to it. Our report’s 19 amendments would affect:

  1. Governance of de-identified data
  2. Knowledge and consent requirements surrounding the socially beneficial purposes exemption and the legitimate interest exemption
  3. Meaningful consent for secondary uses
  4. Indigenous sovereignty
  5. Enforcement mechanisms
  6. Accessibility and corporate transparency

While we frankly believe that the legislation should be withdrawn and re-drafted with human rights as its guiding principle, we also recognise that this is unlikely to happen. As such, our amendments are meant to round off some of the legislation’s sharpest edges, though we also recognise that further amendments to other parts of the legislation are likely required.

Ultimately, if the government of Canada is truly serious about ensuring that individuals and communities are involved in developing policies that pertain to them and their communities, ameliorating disadvantages faced by marginalized residents of Canada, and committing to reconciliation with Indigenous populations, it will commit to serious amendments to Bill C-27 and the CPPA. Our recommendations are made in the spirit of addressing the gaps in this new legislation that are laid bare when assessing how it intersects with Health Canada’s historical use of locational information. They are, however, only a start toward the necessary amendments to this legislation.

Executive Summary

The Government of Canada obtained de-identified and aggregated mobility data from private companies for the socially beneficial purpose of trying to understand and combat the spread of COVID-19. This collection began as early as March 2020, and the information was provided by Telus and BlueDot. It wasn’t until December 2021, after the government issued a request for proposals for cellular tower information that would extend the collection of mobility information, that the public became widely aware of the practice. Parliamentary hearings into the government’s collection of mobility data began shortly thereafter, and a key finding was that Canada’s existing privacy legislation is largely ineffective in managing the collection, use, and disclosure of data in a manner that recognizes the privacy rights of individuals. In spite of this finding, in June 2022 the federal government introduced Bill C-27: An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, which, if passed into law, will fail to correct existing deficiencies in Canada’s federal commercial privacy law. In particular, Bill C-27 would make explicit that the government can continue collecting information, including mobility data from private organizations, so long as the uses are socially beneficial, without clearly demarcating what will or will not constitute such uses in the future.

This report, “Minding Your Business: A Critical Analysis of the Collection of De-identified Mobility Data and Its Use Under the Socially Beneficial and Legitimate Interest Exemptions in Canadian Privacy Law,” critically assesses the government’s existing practice of collecting mobility information for socially beneficial purposes as well as private organizations’ ability to collect and use personal information without first obtaining consent from individuals or providing them with knowledge of the commercial activities. It uses examples raised during the COVID-19 pandemic to propose 19 legislative amendments to Bill C-27. These amendments would enhance corporate and government accountability for the collection, use, and disclosure of information about Canadian residents and communities, including for so-called de-identified information.

Part 1 provides a background of key privacy issues that were linked to collecting mobility data during the COVID-19 pandemic. We pay specific attention to the implementation of new technologies to collect, use, and disclose data, such as those used for contact-tracing applications and those that foreign governments used to collect mobility information from telecommunications carriers. We also discuss the concerns linked to collecting location information and the consequent need to develop robust governance frameworks.

Part 2 focuses on the collection of mobility data in Canada. It outlines what is presently known about how Telus and BlueDot collected the mobility information that was subsequently disclosed to the government in aggregated and de-identified formats, and it discusses the key concerns raised in meetings held by the Standing Committee on Access to Information, Privacy and Ethics. The Committee’s meetings and final report make clear that there was an absence of appropriate public communication from the federal government about its collection of mobility information as well as a failure to meaningfully consult with the Office of the Privacy Commissioner of Canada. The Government of Canada also failed to verify that Telus and BlueDot had obtained meaningful consent prior to receiving data that was used to generate insights into Canadian residents’ activities during the pandemic.

Part 3 explores the lawfulness of the collection of mobility data by BlueDot and Telus and the disclosure of the data to the Public Health Agency of Canada under existing federal privacy law. Overall, we find that BlueDot and Telus likely complied with current privacy legislation. The assessment of the lawfulness of BlueDot and Telus’ activities serves to reveal deficiencies in Canada’s two pieces of federal privacy legislation, the Privacy Act and the Personal Information Protection and Electronic Documents Act (PIPEDA).

In Part 4, we identify six thematic deficiencies in Canada’s commercial privacy legislation:

  1. PIPEDA fails to adequately protect the privacy interests at stake with de-identified and aggregated data, despite the risks associated with re-identification.
  2. PIPEDA lacks requirements that individuals be informed of how their data is de-identified or used for secondary purposes.
  3. PIPEDA does not enable individuals or communities to substantively prevent harmful impacts of data sharing with the government.
  4. PIPEDA lacks sufficient checks and balances to ensure that meaningful consent is obtained to collect, use, or disclose de-identified data.
  5. PIPEDA accounts for neither Indigenous data sovereignty nor the Indigenous sovereignty principles in the United Nations Declaration on the Rights of Indigenous Peoples, which Canada has adopted.
  6. PIPEDA generally lacks sufficient enforcement mechanisms.

The Government of Canada has introduced the Consumer Privacy Protection Act (CPPA) in Bill C-27 to replace PIPEDA. Part 5 demonstrates that Bill C-27 does not adequately ameliorate the deficiencies of PIPEDA as discussed in Part 4. Throughout, Part 5 offers corrective recommendations to the Consumer Privacy Protection Act that would alleviate many of the thematic issues facing PIPEDA and, by extension, the CPPA.

The federal government and private organizations envision the Consumer Privacy Protection Act as permitting private individuals’ and communities’ data to be exploited for the benefit of the economy and society alike. The legislation includes exceptions to consent and sometimes waives the protections that would normally be associated with de-identified data, where such exemptions could advance socially beneficial purposes or legitimate business interests. While neither the government nor private business necessarily intends to use de-identified information to injure, endanger, or negatively affect the persons and communities from whom the data is obtained, the breadth of potential socially beneficial purposes means that future governments will have a wide ambit to define the conceptual and practical meaning of these purposes. Some governments, for example, might analyze de-identified data to assess how far people must travel to obtain abortion-care services and, subsequently, recognize that more services are required. Other governments could use the same de-identified mobility data, come to the opposite conclusion, and selectively adopt policies to impair access to such services. This is but one of many examples. There are similar, though not identical, dangers should private organizations be able to collect or use an individual’s personal information without their consent under the legitimate interest exemption in the CPPA. Specifically, this exemption would let private organizations determine whether the benefits of collecting or using personal information outweigh the adverse effects of doing so, with the individuals and communities affected being left unaware of how personal information was collected or used, and thus unable to oppose collections or uses with which they disagree.
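
To see how easily the same aggregate can serve opposing agendas, consider a minimal sketch of the travel-distance analysis described above. Everything here is invented for illustration: the coordinates, the trip counts, and the assumption that the data arrives as origin–destination pairs with counts attached.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Invented aggregated flows: (origin centroid, clinic centroid, trip count).
flows = [
    ((49.28, -123.12), (49.25, -123.10), 480),  # short urban trips
    ((53.54, -113.49), (51.05, -114.07), 120),  # roughly 280 km each way
    ((58.76, -94.16), (49.90, -97.14), 35),     # roughly 1,000 km each way
]

total_km = sum(haversine_km(*origin, *clinic) * n for origin, clinic, n in flows)
total_trips = sum(n for _, _, n in flows)
print(f"Mean travel distance: {total_km / total_trips:.0f} km per trip")
```

The computation itself is policy-neutral: whether the resulting number justifies adding services or identifies where restrictions will bite hardest is decided entirely outside the data, which is precisely why the breadth of “socially beneficial purposes” matters.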

Parliamentary committees, the Office of the Privacy Commissioner of Canada, Canadian academics, and civil society organizations have all called for the federal government to amend federal privacy legislation. As presently drafted, however, the Consumer Privacy Protection Act would entrench deficiencies that already exist in Canadian law while opening the door to expanded data collection, use, and disclosure by private organizations to the federal government without sufficient accountability or transparency safeguards, and would simultaneously empower private organizations to collect and use personal information without prior consent or knowledge. Such safeguards must be added in legislative amendments or Canada’s new privacy legislation will continue the trend of inadequately protecting individuals and communities from the adverse effects of using de-identified data to advance so-called socially beneficial purposes or using personal information for ostensibly legitimate business purposes.

Public and Privacy Policy Implications of PHAC’s Use of Mobility Information

Last week I appeared before the House of Commons’ Standing Committee on Access to Information, Privacy and Ethics to testify about the public and privacy policy implications of PHAC’s use of mobility information since March 2020. I provided oral comments to the committee which were, substantially, a truncated version of the brief I submitted. If interested, my oral comments are available to download. What follows in this post is the content of the brief as submitted.

Introduction

  1. I am a senior research associate at the Citizen Lab, Munk School of Global Affairs & Public Policy at the University of Toronto. My research explores the intersection of law, policy, and technology, and focuses on issues of national security, data security, and data privacy. While I submit these comments in a professional capacity, they do not necessarily represent the full views of the Citizen Lab.

Canadian Government’s Pandemic Data Collection Reveals Serious Privacy, Transparency, and Accountability Deficits

Photo by Keira Burton on Pexels.com

Just before Christmas, Swikar Oli published an article in the National Post that discussed how the Public Health Agency of Canada (PHAC) obtained aggregated and anonymized mobility data for 33 million Canadians. From the story, we learn that the contract was awarded in March to TELUS, and that PHAC used the mobility data to “understand possible links between movement of populations within Canada and spread of COVID-19.”

Around the same time as the article was published, PHAC posted a notice of tender to continue collecting aggregated and anonymized mobility data that is associated with Canadian cellular devices. The contract would remain in place for several years and be used to continue providing mobility-related intelligence to PHAC.

Separate from either of these means of collecting data, PHAC has also been purchasing mobility data “… from companies who specialize in producing anonymized and aggregated mobility data based on location-based services that are embedded into various third-party apps on personal devices.” There has also been little discussion of PHAC’s collection and use of data from these kinds of third parties, which tend to be advertising and data surveillance companies that consumers have no idea are collecting, repackaging, and monetizing their personal information.
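
Aggregation of this kind usually involves bucketing device-level records and suppressing cells that fall below a minimum count. The sketch below shows the general shape of such a pipeline; the threshold of five and the region-by-date bucketing are assumptions chosen for illustration, not TELUS’s or PHAC’s documented methodology.

```python
from collections import Counter

# Hypothetical per-device records: (home_region, observed_region, date).
records = [
    ("Ottawa", "Gatineau", "2020-04-01"),
    ("Ottawa", "Gatineau", "2020-04-01"),
    ("Ottawa", "Toronto", "2020-04-01"),
    # ... millions more rows in a real pipeline
]

K = 5  # suppression threshold: an assumption, not a documented TELUS/PHAC value

counts = Counter(records)
published = {cell: n for cell, n in counts.items() if n >= K}
suppressed = [cell for cell, n in counts.items() if n < K]

print(f"published {len(published)} cells; suppressed {len(suppressed)} small cells")
```

Even with suppression, the governance questions remain: the threshold, the bucket sizes, and the retention period are all choices made by the data holder, invisible to the people whose movements populate the dataset.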

There are, at first glance, at least four major issues that arise out of how PHAC has obtained, and can use, the aggregated and anonymized information to which it has had, and plans to have, access.


Answers and Further Analysis Concerning NSIRA’s 2021 Cyber Incident

Photo by Pixabay on Pexels.com

The National Security and Intelligence Review Agency (NSIRA) is responsible for conducting national security reviews of Canadian federal agencies. On April 16, 2021, the Agency announced that it had suffered a ‘cyber incident’ in which an unauthorized party accessed the Agency’s unclassified external network. The affected network did not contain Secret, Top Secret, or Top Secret SI information. In August 2021, NSIRA posted an update with additional details about the incident it had experienced.

I raised a number of questions about the nature of the Agency’s incident, and its implications, in a post I published earlier in 2021. In this post, I provide an update as well as some further analysis of the incident based on the information that NSIRA revealed in August 2021.

I begin by outlining the additional details that NSIRA has provided about the incident and juxtapose that information with what has been provided by the Canadian Centre for Cyber Security (CCCS) about the Microsoft Exchange vulnerability that led to NSIRA’s incident. I note that NSIRA (or the team(s) responsible for securing its networks) seems either to have failed to patch NSIRA’s on-premises Exchange server when the vulnerability was first announced or to have been unable to successfully implement mitigation measures intended to prevent the exploitation of the server. The result was that employee information was obtained by an unauthorized party.
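
For readers wondering what “failing to patch” looks like operationally, the sketch below shows the kind of inventory check a security team might run: compare each server’s reported build against the minimum build shipped in a security update. The build numbers and hostnames are placeholders, not the actual March 2021 Exchange security-update builds.

```python
# Hypothetical inventory check: flag Exchange servers whose reported build
# predates the minimum build shipped in a security update. All values here
# are placeholders for illustration.
PATCHED_BUILD = (15, 1, 2176, 9)  # placeholder "minimum safe" build

servers = {
    "mail01.example.org": (15, 1, 2106, 2),   # behind: needs patch or mitigation
    "mail02.example.org": (15, 1, 2176, 12),  # at or above the safe build
}

for host, build in servers.items():
    status = "OK" if build >= PATCHED_BUILD else "VULNERABLE: patch or mitigate"
    print(f"{host}: build {'.'.join(map(str, build))} -> {status}")
```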

Next, I note the extent to which NSIRA’s update responds to the initial questions I raised when writing about this incident in April 2021. On the whole, most of the questions I raised have been answered to at least some extent.

I conclude by discussing the significance of the information that was exfiltrated from NSIRA, the likelihood that a nation-state actor either conducted the operation or now has access to the exfiltrated data, and what this incident may suggest for NSIRA’s IT security, and by raising questions about NSIRA’s decommissioning of its Protected networks.


The Problems and Complications of Apple Monitoring for Child Sexual Abuse Material in iCloud Photos

Photo by Mateusz Dach on Pexels.com

On August 5, 2021, Apple announced that it would soon begin conducting pervasive surveillance of the devices that it sells, with the stated intent of expanding protections for children. The company announced three new features. The first will monitor for children sending or receiving sexually explicit images using the Messages application. The second will monitor for the presence of Child Sexual Abuse Material (CSAM) in iCloud Photos. The third will monitor for searches pertaining to CSAM. These features are planned to be activated in the United States in the next versions of Apple’s operating systems, which will ship to end-users in the fall of 2021.

In this post I focus exclusively on the surveillance of iCloud Photos for CSAM content. I begin with a background of Apple’s efforts to monitor for CSAM content on its services before providing a description of the newly announced CSAM surveillance system. I then turn to outline some problems, complications, and concerns with this new child safety feature. In particular, I discuss the challenges facing Apple in finding reputable child safety organizations with which to partner, the potential ability to region-shift to avoid the surveillance, the prospect of the surveillance system leading to ongoing harms towards CSAM survivors, the likelihood that Apple will expand the content which is subject to the company’s surveillance infrastructure, and the weaponization of the CSAM surveillance infrastructure against journalists, human rights defenders, lawyers, opposition politicians, and political dissidents. I conclude with a broader discussion of the problems associated with Apple’s new CSAM surveillance infrastructure.
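
Although the system’s full design is beyond this excerpt, its basic logic can be sketched: image hashes are matched against a database of known CSAM, and an account is flagged for human review only once matches cross a threshold. The sketch below is a drastic simplification. Apple’s design relies on a perceptual hash (NeuralHash), a blinded hash database, and threshold secret sharing, none of which is reproduced here; the threshold value, hash function, and names below are illustrative stand-ins.

```python
import hashlib

MATCH_THRESHOLD = 30  # Apple indicated a threshold on this order; illustrative

def image_digest(data: bytes) -> str:
    # Stand-in for a perceptual hash such as NeuralHash; a cryptographic
    # hash is used here only so that the sketch runs.
    return hashlib.sha256(data).hexdigest()

# A pretend database of digests of known CSAM (synthetic placeholders).
known_digests = {image_digest(b"known-bad-example-%d" % i) for i in range(3)}

def account_should_be_reviewed(library: list) -> bool:
    """Flag an account only once matches cross the review threshold."""
    matches = sum(image_digest(img) in known_digests for img in library)
    return matches >= MATCH_THRESHOLD

library = [b"holiday-photo-%d" % i for i in range(1_000)]
print("flag for human review:", account_should_be_reviewed(library))
```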

A previous post focused on the surveillance of children’s messages to monitor for sexually explicit photos. Future posts will address the third child safety feature that Apple has announced, as well as the broader implications of Apple’s child safety initiatives.

Background to Apple Monitoring for CSAM

Apple has previously worked with law enforcement agencies to combat CSAM, though the full contours of that assistance are largely hidden from the public. In May 2019, Mac Observer noted that the company had modified their privacy policy to read, “[w]e may also use your personal information for account and network security purposes, including in order to protect our services for the benefit of all our users, and pre-screening or scanning uploaded content for potentially illegal content, including child sexual exploitation material” (emphasis not in original). Per Forbes, Apple places email messages under surveillance when they are routed through its systems. Mail is scanned and, if CSAM content is detected, Apple automatically prevents the email from reaching its recipient and assigns an employee to confirm the CSAM content of the message. If the employee confirms the existence of CSAM content, the company subsequently provides subscriber information to the National Center for Missing and Exploited Children (NCMEC) or a relevant government agency.1
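
Based on this reporting, the mail-screening flow amounts to: automated match, withhold delivery, human confirmation, then a report to NCMEC or a relevant agency. The sketch below mirrors only that flow; the hashing step is a stand-in (Apple has not published its actual matching mechanism), and every name here is invented for illustration.

```python
import hashlib
from dataclasses import dataclass, field

# Placeholder digest set standing in for Apple's undisclosed matching data.
KNOWN_DIGESTS = {hashlib.sha256(b"known-bad-example").hexdigest()}

@dataclass
class Email:
    sender: str
    attachments: list = field(default_factory=list)

def matches_known_material(attachment: bytes) -> bool:
    return hashlib.sha256(attachment).hexdigest() in KNOWN_DIGESTS

def human_review_confirms(email: Email) -> bool:
    return True  # placeholder for the employee confirmation step

def process(email: Email) -> str:
    if any(matches_known_material(a) for a in email.attachments):
        # Delivery is withheld pending review, per the reporting above.
        if human_review_confirms(email):
            # Subscriber information would be passed to NCMEC or a
            # relevant government agency at this point.
            return "withheld and reported"
        return "released"
    return "delivered"

print(process(Email("user@example.com", [b"vacation.jpg"])))
```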


Apple’s Monitoring of Children’s Communications Content Puts Children and Adults at Risk

Photo by Torsten Dettlaff on Pexels.com

On August 5, 2021, Apple announced that it would soon begin conducting pervasive surveillance of the devices that it sells, with the stated intent of expanding protections for children. The company announced three new features. The first will monitor for children sending or receiving sexually explicit images over the Messages application, the second will monitor for the reception or collection of Child Sexual Abuse Material (CSAM), and the third will monitor for searches pertaining to CSAM. These features are planned to be activated in the next versions of Apple’s mobile and desktop operating systems, which will ship to end-users in the fall of 2021.

In this post I focus exclusively on the surveillance of children’s messages to detect whether they are receiving or sending sexually explicit images. I begin with a short discussion of how Apple has described this system and spell out the rationales for it, and then proceed to outline some early concerns with how this feature might negatively affect children and adults alike. Future posts will address the second and third child safety features that Apple has announced, as well as broader problems associated with Apple’s unilateral decision to expand surveillance on its devices.

Sexually Explicit Image Surveillance in Messages

Apple currently lets families share access to Apple services and cloud storage using Family Sharing. The organizer of a Family Sharing plan can use a number of parental controls to restrict the activities of children who are included in the plan. For Apple’s purposes, children are individuals under 18 years of age.

Upon the installation of Apple’s forthcoming mobile and desktop operating systems, and where this analysis feature is enabled in Family Sharing, children’s communications over Apple’s Messages application can be analyzed to assess whether they include sexually explicit images. Apple’s analysis of images will occur on-device, and Apple will not be notified of whether an image is sexually explicit. Should an image be detected, it will initially be blurred out; if a child wants to see the image, they must proceed through either one or two prompts, depending on their age and how their parents have configured the parental management settings.
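
That prompt flow can be restated as a small decision function. The under-13 versus 13-to-17 split and the parental-notification behaviour below reflect Apple’s 2021 announcement as publicly described; the function names, prompt text, and return values are invented for illustration.

```python
# Sketch of the prompt flow described above, under stated assumptions:
# younger children face an extra prompt when parental notification is on.

def prompts_required(child_age: int, parental_notification_enabled: bool) -> list:
    prompts = ["This image may be sensitive. View anyway?"]
    if child_age < 13 and parental_notification_enabled:
        prompts.append("If you view this image, your parents will be notified.")
    return prompts

def display_image(child_age: int, parental_notification_enabled: bool,
                  flagged_as_explicit: bool) -> str:
    if not flagged_as_explicit:
        return "shown normally"
    # Flagged images arrive blurred; viewing requires tapping through prompts.
    steps = prompts_required(child_age, parental_notification_enabled)
    return f"blurred; {len(steps)} prompt(s) before display"

print(display_image(12, True, True))   # blurred; 2 prompt(s) before display
print(display_image(16, True, True))   # blurred; 1 prompt(s) before display
```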
