Finding You: The Network Effect of Telecommunications Vulnerabilities for Location Disclosure

Last week, I published a report with Gary Miller and the Citizen Lab entitled, “Finding You: The Network Effect of Telecommunications Vulnerabilities for Location Disclosure.” I undertook this research while still employed by the Citizen Lab and was delighted to see it available to the public. In it, we discuss how the configuration and vulnerabilities of contemporary telecommunications networks enable surveillance actors to surreptitiously monitor the location of mobile phone users.

The report provides a high-level overview of the geolocation-related threats associated with contemporary networks that depend on the protocols used by 3G, 4G, and 5G network operators, followed by evidence of the proliferation of these threats. Part 1 provides the historical context of unauthorized location disclosures in mobile networks and the importance of the target identifiers used by surveillance actors. Part 2 explains how mobile networks are made vulnerable by signaling protocols used for international roaming, and how networks are made available to surveillance actors to carry out attacks. An overview of the mobile ecosystem lays the foundation for the technical details of domestic versus international network surveillance, while an examination of active versus passive surveillance techniques, with evidence of attacks, shows how location information is presented to the actor. Part 3 provides details of a case study from a media report that shows evidence of widespread state-sponsored surveillance, followed by threat intelligence data revealing network sources attributed to attacks detected in 2023. These case studies underscore the significance and relevance of undertaking these kinds of surveillance operations.

Deficiencies in oversight and accountability of network security are discussed in Part 4. This includes outlining the incentives and enablers that are provided to surveillance actors from industry organizations and government regulatory agencies. Part 5 makes clear that the adoption of 5G technologies will not mitigate future surveillance risks unless policymakers quickly move to compel telecommunications providers to adopt the security features that are available in 5G standards and equipment. If policymakers do not move swiftly then surveillance actors may continue to prey upon mobile phone users by tracking their physical location. Such a future paints a bleak picture of user privacy and must be avoided.

The G7 Communique and Artificial Intelligence

The G7 Communique, issued on May 20, included discussions of AI technology and governance. While the comments are high-level, they are worth paying attention to since they may indicate where ongoing strategic pressure will be placed when developing AI policies.

The G7’s end goal for AI is to ensure that trustworthy AI is developed in alignment with democratic values. The specific values called out include:

  • fairness;
  • accountability;
  • transparency;
  • safety;
  • protection from online harassment, hate, and abuse; and
  • respect for privacy and human rights, fundamental freedoms, and the protection of personal data.

While not surprising, the core values stated do underscore the role for privacy regulators and advocates in the development of AI governance policies and practices.

Three other highlights include:

  1. The need to work with private parties to promote responsible AI, with the caveat that platforms are singled out for needing to address child sexual exploitation and abuse while upholding children’s rights to safety and privacy online.
  2. A strong emphasis on developing interoperable international governance and technical standards to promote responsible AI governance and technologies.
  3. A commitment by the G7, in collaboration with the OECD and GPAI, to launch discussions on generative AI technologies by the end of the year.

The first point, concerning child sexual exploitation, either suggests a new front in the discussions of technology policy and online child abuse images or is just another reference to ongoing pressure on large internet platforms. Only time will tell how to interpret this aspect of the G7’s messaging. Monitoring other Five Eyes meetings and G7 outputs may help with this interpretation.

The second point, on international governance, raises the question of whether federal governments will link national regulations to international standards. Should that occur, it will be interesting to see the extent to which regulations in Canada’s Artificial Intelligence and Data Act ultimately refer to, or integrate, such standards. Assuming, of course, that the Act is passed into law in its present format.

The third point underscores how generative AI technologies are attracting attention on prominent and important national and international agendas. It remains to be seen, however, whether such attention persists and whether significant concerns continue to percolate as the public and politicians become used to the technology and its increasing integration with everyday computing functions. For my money, I don’t see emerging uses of AI systems falling off the agenda anytime in the near future.

If you’re interested in assessing the AI-related aspects of the Communique yourself, you can find them in the Preamble at 1, as well as in Digital at 38.

Relaunch of the SIGINT Summaries

Photo by Brett Sayles on Pexels.com

In 2013, journalists began revealing secrets associated with members of the Five Eyes (FVEY) intelligence alliance. These secrets were disclosed by Edward Snowden, a US intelligence contractor. The journalists who published about the documents did so after carefully assessing their content and removing information that was identified as unduly injurious to national security interests or that threatened to reveal individuals’ identities.

During my tenure at the Citizen Lab I provided expert advice to journalists about the newsworthiness of different documents and, also, when content should be redacted as its release was not in the public interest. In some cases documents that were incredibly interesting were never published on the basis that doing so would be injurious to national security, notwithstanding the potential newsworthiness of the documents in question. As an element of my work, I identified and summarized published documents and covernames which were associated with Canada’s signals intelligence agency, the Communications Security Establishment (CSE).

I am happy to announce a re-launch of the SIGINT summaries, now with far more content.

In all cases, the materials which are summarised on my website have been published, in open source, by professional news organizations or other publishers. None of the material that I summarise or host is new and none of it has been leaked or provided to me by government or non-government bodies. No current or former intelligence officer has provided me with details about any of the covernames or underlying documents. This said, researchers associated with the Citizen Lab and other academic institutions have, in the past, contributed to some of the materials published on this website.

As a caveat, all descriptions of what the covernames mean or refer to, and of what is contained in individual documents leaked by Edward Snowden, are provided on a best-effort basis. Entries will be updated periodically as time is available to analyse further documents or materials.

How Were Documents Summarized?

In assessing any document I have undertaken the following steps:

  1. Re-created my template for all Snowden documents, which includes information about the title, metadata associated with the document (e.g., when it was made public and in what news story, when it was created, which agency created it), and a listing of the covernames that appear in the document.
  2. When searching documents for covernames, I moved slowly through each document and often zoomed into charts, figures, or other materials in order to decipher both the covernames that are prominent in a given document and those in much smaller fonts. As a result, in some cases my analyses have indicated more covernames being present than in other public repositories, which have relied on OCR-based methods to extract covernames from texts.
  3. I read carefully through the text of the document, sometimes several times, to provide a summary of its highlights. Note that this is based on my own background and, as such, the summaries may miss items that other readers find notable or interesting. The summaries avoid editorialising to the best of my ability.
  4. In a separate file, I keep a listing of the given agency’s covernames. Using the covernames listed in the summary, I worked through the document in question to assess what, if anything, was said about a covername and whether it was new or expanded my understanding of that covername. Where it did, I added additional sentences to the covername’s entry in the relevant agency’s listing, along with a page reference to source the new information. The intent was both to develop a kind of partial covername decoder and to enable other experts to assess how I reached conclusions about what covernames mean.
  5. There was sometimes an editorial process involving rough third-party copyediting and expert peer review. Both, however, have been reliant on external parties having the time and expertise to provide these services. While many of the summaries and covername listings have been copyedited or reviewed, this is not the case for all of them.
  6. Finally, the new entries have been published on this website.
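As a purely illustrative aside, the OCR-plus-pattern-matching approach that other repositories rely on (contrasted with my manual review in step 2 above) can be sketched in a few lines. The regex heuristic, stoplist, and sample text here are my own assumptions, not anyone's actual pipeline; the sketch simply shows why covernames embedded in small-font imagery are easy for such automated passes to miss.

```python
import re

def extract_covername_candidates(text: str) -> set[str]:
    """Find candidate covernames: runs of four or more capital letters.

    A rough heuristic mimicking OCR-plus-pattern-matching extraction.
    It only sees text the OCR layer recovered, which is why covernames
    rendered in tiny fonts inside charts or figures get missed.
    """
    # Covernames such as AURORAGOLD appear as all-caps tokens.
    candidates = set(re.findall(r"\b[A-Z]{4,}\b", text))
    # Filter common all-caps noise (classification markings, acronyms)
    # with a small, hypothetical stoplist.
    stoplist = {"SECRET", "COMINT", "NOFORN", "NSA", "GCHQ", "CSE"}
    return candidates - stoplist

# Hypothetical text recovered by OCR from a slide deck.
sample = "TOP SECRET//COMINT: AURORAGOLD partners with WOLFRAMITE."
print(sorted(extract_covername_candidates(sample)))
```

Anything the OCR layer failed to transcribe, such as a covername in six-point type inside an embedded image, simply never reaches the regex, which is the gap manual zooming closes.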

Also, as part of my assessment process I have normalized the names of documents. This has meant I’ve often re-named original documents and, in some cases, split conjoined documents which were published by news organizations into individual documents (e.g., a news organization may have published a series of documents linked to AURORAGOLD as a single .pdf instead of publishing each document or slide deck as its own .pdf). The result is that some of the materials which are published on this website may appear new—it may seem as though there are no other sources on the Internet that appear to host a given document—but, in fact, these are just smaller parts of larger conjoined .pdfs.

Commonly Asked Questions

Why isn’t XXX document included in your list of summarised documents? It’s one of the important ones!

There are a lot of documents to work through and, to some extent, my review of them has been motivated either by specific projects or by the time I have had available to assess documents over the past many years. Documents have not been processed based on when they were published. It can take anywhere from 10 minutes to 5 hours or more to process a given document, and at times I have chosen to focus on documents based on the time available to me or on research projects I have undertaken.

Why haven’t you talked about the legal or ethical dimensions of these documents?

There are any number of venues where I have professionally discussed the activities which have been carried out by, and continue to be carried out by, Western signals intelligence agencies. The purpose of these summaries is to provide a maximally unbiased explanation of what is actually in the documents, instead of injecting my own views of what they describe.

A core problem in discussing the Snowden documents is a blurring of what the documents actually say versus what people think they say, and the appropriateness or legality of what is described in them. This project is an effort to provide a more robust foundation to understand the documents, themselves, and then from there other scholars and experts may have more robust assessments of their content.

Aren’t you endangering national security by publishing this material?

No, I don’t believe that I am. Documents which I summarise and the covernames which I summarise have been public for many, many years. These are, functionally, now historical texts.

Any professional intelligence service worth its salt will have already mined all of these documents and performed an equivalent level of analysis some time ago. Scholars, the public, and other experts, however, have not had the same resources to similarly analyse and derive value from the documents. In the spirit of open scholarship I am sharing these summaries. I also hope that they are helpful for policymakers so that they can better assess and understand the historical capabilities of some of the most influential and powerful signals intelligence agencies in the world.

Finally, all of the documents, and covernames, which are summarised have been public for a considerable period of time. Programs will have since been further developed or been terminated, and covernames rotated.

What is the narrative across the documents and covernames?

I regard the content published here as a kind of repository that can help the public and researchers undertake their own processes of discovery, based on their own interests. Are you interested in how the FVEY agencies have assessed VPNs, encryption, smartphones, or other topics? Then you could do a search on agencies’ summary lists or covernames to find content of interest. More broadly, however, I think that there is a substantial amount of material which has been synthesised by journalists or academics; these summaries can be helpful to assess their accuracy in discussing the underlying material and, in most cases, the summaries of particular documents link to journalistic reporting that tries to provide a broader narrative to sets of documents.

Why haven’t you made this easier to understand?

I am aware that some of the material is still challenging to read. This was the case for me when I started reading the Snowden documents, and it led to several rounds of reading and revising summaries as I and colleagues developed a deeper understanding of what the documents were trying to communicate.

To some extent, reading the Snowden documents parallels learning a new language: it is frustrating to engage with at first but, over time, you develop an understanding of its structure and grammar. The same is true as you read more of the summaries, underlying documents, and covername descriptions. My intent is that, with the material assembled on this website, the time to become fluent will be massively reduced.

Future Plans

Over time I hope to continue to add to the summaries, though this will continue as a personal historical project. As such, updates will be made only as I have time available to commit to the work.


  1. As of writing, no reviewed Snowden document explicitly discloses an ASD covername. ↩︎

Why Is(n’t) TikTok A National Security Risk?

Photo by Ron Lach on Pexels.com

There have been grumblings about TikTok being a national security risk for many years and they’re getting louder with each passing month. Indeed, in the United States a bill has been presented to ban TikTok (“The ANTI-SOCIAL CCP Act”) and a separate bill (“No TikTok on Government Devices Act”) has passed the Senate and would bar the application from being used on government devices. In Canada, the Prime Minister noted that the country’s signals intelligence agency, the Communications Security Establishment, is “watching very carefully.”

I recently provided commentary where I outlined some of the potential risks associated with TikTok and where it likely should fit into Canada’s national security priorities (spoiler: probably pretty low). Here I just want to expand on my comments a bit to provide some deeper context and reflections.

As with all things security-related you need to think through what assets you are attempting to protect, the sensitivity of what you’re trying to protect, and what measures are more or less likely to protect those assets. Further, in developing a protection strategy you need to think through how many resources you’re willing to invest to achieve the sought-after protection. This applies as much to national security policy makers as it does to individuals trying to secure devices or networks.

What Is Being Protected

Most public figures who talk about TikTok and national security are presently focused on one or two assets.

First, they worry that a large volume of data may be collected and used by Chinese government agencies, after these agencies receive it either voluntarily from TikTok or after compelling its disclosure. Commentators argue that Chinese companies are bound to obey the national security laws of China and, as such, may be forced to disclose data without any notice to users or non-Chinese government agencies. This data could reveal information about specific individuals or communities, inclusive of what people are searching for on the platform (e.g., medical information, financial information, sexual preference information), what they themselves are posting that could prove embarrassing, or metadata which could be used for subsequent targeting.

Second, some commentators are adopting a somewhat odious language of ‘cognitive warfare’ in talking about TikTok.1 The argument is that the Chinese government might compel the company to modify its algorithms so as to influence what people are seeing on the platform. The intent of this modification would be to influence political preferences or social and cultural perceptions. Some worry this kind of influence could guide whom individuals are more likely to vote for (e.g., you see a number of videos that directly or indirectly encourage you to support particular political parties), cause generalised apathy (e.g., you see videos that suggest that all parties are bad and none worth voting for), or enhance societal tensions (e.g., work to inflame partisanship and impair the functioning of otherwise moderate democracies). Or, as likely, a combination of each of these kinds of influence operations. Moreover, the TikTok algorithm could be modified by government compulsion to prioritise videos that praise some countries or that suppress videos which negatively portray other countries.

What Is the Sensitivity of the Assets?

When we consider the sensitivity of the information and data which is collected by TikTok it can be potentially high but, in practice, possesses differing sensitivities based on the person(s) in question. Research conducted by the University of Toronto’s Citizen Lab found that while TikTok does collect a significant volume of information, that volume largely parallels what Facebook or other Western companies collect. To put this slightly differently, a lot of information is collected and the sensitivity is associated with whom it belongs to, who may have access to it, and what those parties do with it.

When we consider who is using TikTok and having their information uploaded to the company’s servers, then, the question becomes whether there is a particular national security risk linked with this activity. While some individuals may potentially be targets based on their political, business, or civil society bona fides, this will not be the case for all (or most) users. However, even in assessing the national security risks linked to individuals (or associated groups), it’s helpful to do a little more thinking.

First, the amount of information that is collected by TikTok, when merged with other data which could theoretically be collected using other signals intelligence methods (e.g., extracting metadata and select content from middle-boxes, Internet platforms, open-source locations, etc.), could be very revealing. Five Eyes countries (i.e., Australia, Canada, New Zealand, the United Kingdom, and the United States of America) collect large volumes of metadata on vast swathes of the world’s populations in order to develop patterns of life which, when added together, can be deeply revelatory. When and how those countries’ intelligence agencies actually use the collected information varies and is kept very secret. Generally, however, only a small subset of individuals whose information is collected and retained for any period of time have actions taken towards them. Nonetheless, we know that there is a genuine concern about information from private companies being obtained by intelligence services in the Five Eyes, and it’s reasonable to be concerned that similar activities might be undertaken by Chinese intelligence services.

Second, the kinds of content information which are retained by TikTok could be embarrassing at a future time, or used by state agencies in ways that users would not expect or prefer. Imagine a situation where a young person says or does something on TikTok which is deeply offensive. Fast forward 3-4 years: their parents are now diplomats or significant members of the business community, and that offensive content is used by Chinese security services to embarrass or otherwise inconvenience the parents. Such influence operations might impede Canada’s ability to conduct its diplomacy abroad or undermine a business’s ability to prosper.

Third, the TikTok algorithm is not well understood. There is a risk that the Chinese government might compel ByteDance, and through them the TikTok platform, to modify algorithms to amplify some content and not others. It is hard to assess how ‘sensitive’ a population’s general sense of the world is but, broadly, if a surreptitious foreign influence operation occurred it might potentially affect how a population behaves or sees the world. To be clear this kind of shift in behaviour would not follow from a single video but from a concerted effort over time that shifted social perceptions amongst at least some distinct social communities. The sensitivity of the information used to identify videos to play, then, could be quite high across a substantial swathe of the population using the platform.

It’s important to recognise that in the aforementioned examples there is no evidence that ByteDance, which owns TikTok, has been compelled by the Chinese government to perform these activities. But these are the kinds of sensitivities that are linked to using TikTok and are popularly discussed.

What Should Be Done To Protect Assets?

The threats which are posed by TikTok are, at the moment, speculative: the platform could be used for any number of things. People’s concerns are linked less to the algorithm or the data that is collected than to ByteDance being a Chinese company that might be influenced by the Chinese government to share data or undertake activities which are deleterious to Western countries’ interests.

Bluntly: the issue raised by TikTok is not necessarily linked to the platform itself but to the geopolitical struggles between China and other advanced economies throughout the world. We don’t have a TikTok problem per se but, instead, have a Chinese national security and foreign policy problem. TikTok is just a very narrow lens through which concerns and fears are being channelled.

So in the absence of obvious and deliberate harmful activities being undertaken by ByteDance and TikTok at the behest of the Chinese government what should be done? At the outset it’s worth recognising that many of the concerns expressed by politicians–and especially those linked to surreptitious influence operations–would already run afoul of Canadian law. The CSIS Act bars clandestine foreign intelligence operations which are regarded as threatening the security of Canada. Specifically, threats to the security of Canada means:

(a) espionage or sabotage that is against Canada or is detrimental to the interests of Canada or activities directed toward or in support of such espionage or sabotage,

(b) foreign influenced activities within or relating to Canada that are detrimental to the interests of Canada and are clandestine or deceptive or involve a threat to any person,

(c) activities within or relating to Canada directed toward or in support of the threat or use of acts of serious violence against persons or property for the purpose of achieving a political, religious or ideological objective within Canada or a foreign state, and

(d) activities directed toward undermining by covert unlawful acts, or directed toward or intended ultimately to lead to the destruction or overthrow by violence of, the constitutionally established system of government in Canada,

CSIS is authorised to undertake measures which would reduce the threats to the security of Canada, perhaps in partnership with the Communications Security Establishment, should such a threat be identified and a warrant obtained from the federal court.

On the whole a general ban on TikTok is almost certainly disproportionate and unreasonable at this point in time. There is no evidence of harm. There is no evidence of influence by the Chinese government. Rather than banning the platform generally I think that more focused legislation or policy could make sense.

First, I think that legislation or (preferably) policies precluding at least some members of government and senior civil servants from using TikTok has some merit. In these cases a risk analysis should be conducted to determine if collected information would undermine the Government of Canada’s ability to secure confidential information or if the collected information could be used for intelligence operations against the government officials. Advice might, also, be issued by the Canadian Security Intelligence Service so that private organisations are aware of their risks. In exceptional situations some kind of security requirements might also be imposed on private organisations and individuals, such as those who are involved in especially sensitive roles managing critical infrastructure systems. Ultimately, I suspect the number of people who should fall under this ban would, and should, be pretty small.

Second, what makes sense is legislation that requires social media companies writ large–not just TikTok–to make their algorithms and data flows legible to regulators. Moreover, individual users should be able to learn, and understand, why certain content is being prioritised or shown to them. Should platforms decline to comply with such a law then sanctions may be merited. Similarly, should algorithmic legibility showcase that platforms are being manipulated or developed in ways that deliberately undermine social cohesion then some sanctions might be merited, though with the caveat that “social cohesion” should be understood as referring to platforms being deliberately designed to incite rage or other strong emotions with the effect of continually, and artificially, weakening social cohesion and amplifying social cleavages. The term should not, however, be seen as a kind of code for creating exclusionary social environments where underprivileged groups continue to be treated in discriminatory ways.

So Is TikTok ‘Dangerous’ From A National Security Perspective?

Based on open source information2 there is no reason to think that TikTok is currently a national security threat. Are there any risks associated with the platform? Sure, but they need to be juxtaposed against equivalent or more serious threats and priorities. We only have so many resources to direct towards the growing legion of legitimate national security risks and issues; funnelling a limited set of resources towards TikTok may not be the best kind of prioritisation.

Consider that while the Chinese government could compel TikTok to disclose information about its users to intelligence and security services…the same government could also use business cutouts and purchase much of the same information from data brokers operating in the United States and other jurisdictions. There would be no need to secretly force a company to do something when, instead, it could just lawfully acquire equivalent (or more extensive!) information. This is a pressing and real national security (and privacy!) issue and is deserving of legislative scrutiny and attention.

Further, while there is a risk that TikTok could be used to manipulate social values…the same is true of other social networking services. Indeed, academic and journalistic research over the past 5-7 years has drawn attention to how popular social media services are designed to deliver dopamine hits and keep us on them. We know that various private companies and public organisations around the world work tirelessly to ‘hack’ those algorithms and manipulate social values. Of course this broader manipulation doesn’t mean that we shouldn’t care but it also makes clear that TikTok isn’t the sole vector of these efforts. Moreover, there are real questions about how well social influence campaigns work: do they influence behaviour–are they supplying change?–or is the efficaciousness of any campaign representative of an attentive and interested pre-existing audience–is demand for the content the problem?

The nice thing about banning, blocking, or censoring material, or undertaking some other kind of binary decision, is that you feel like you’ve done something. Bans, blocks, and censors are typically designed for a black and white world. We, however, live in a world that is actually shrouded in greys. We only have so much legislative time, so much policy capacity, so much enforcement ability: it should all be directed efficiently to understanding, appreciating, and addressing the fullness of the challenges facing states and society. This time and effort should not be spent on performative politics that is great for providing a dopamine hit but which fails to address the real underlying issues.


  1. I have previously talked about the broader risks of correlating national security and information security.
  2. Open source information means information which you or I can find, and read, without requiring a security clearance.

Minding Your Business: A Critical Analysis of the Collection of De-identified Mobility Data and Its Use Under Socially Beneficial and Legitimate Business Exemptions in Canadian Privacy Law

Earlier this month Amanda Cutinha and I published a report, entitled “Minding Your Business: A Critical Analysis of the Collection of De-identified Mobility Data and Its Use Under Socially Beneficial and Legitimate Business Exemptions in Canadian Privacy Law.” In it, we examine how the Government of Canada obtained and used mobility data over the course of the COVID-19 pandemic, and use that recent history to analyse and critique the Consumer Privacy Protection Act (CPPA).

The report provides a detailed summary of how mobility information was collected as well as a legal analysis of why the collection and use of this information likely conformed with the Privacy Act as well as the Personal Information Protection and Electronic Documents Act (PIPEDA). We use this conformity to highlight a series of latent governance challenges in PIPEDA, namely:

  1. PIPEDA fails to adequately protect the privacy interests at stake with de-identified and aggregated data despite risks that are associated with re-identification.
  2. PIPEDA lacks requirements that individuals be informed of how their data is de-identified or used for secondary purposes.
  3. PIPEDA does not enable individuals or communities to substantively prevent harmful impacts of data sharing with the government.
  4. PIPEDA lacks sufficient checks and balances to ensure that meaningful consent is obtained to collect, use, or disclose de-identified data.
  5. PIPEDA does not account for Indigenous data sovereignty nor does it account for Indigenous sovereignty principles in the United Nations Declaration on the Rights of Indigenous Peoples, which has been adopted by Canada.
  6. PIPEDA generally lacks sufficient enforcement mechanisms.
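The re-identification risk flagged in the first of these challenges is straightforward to demonstrate. The sketch below uses entirely hypothetical data (the traces, tower names, and helper functions are illustrative inventions, not drawn from the report or from any carrier’s actual datasets) to show how location traces can remain unique even after names are stripped:

```python
# Toy illustration of why "de-identified" mobility traces can still be
# re-identifying. Each record is a pseudonymous trace: a set of
# (cell_tower, hour) observations with no name attached. An adversary
# who knows a few points about a target can often match exactly one trace.

from itertools import combinations

# Hypothetical pseudonymous traces (user IDs replaced with opaque tokens).
traces = {
    "u1": {("tower_A", 8), ("tower_B", 12), ("tower_C", 18), ("tower_A", 22)},
    "u2": {("tower_A", 8), ("tower_D", 12), ("tower_C", 18), ("tower_E", 22)},
    "u3": {("tower_F", 8), ("tower_B", 12), ("tower_C", 17), ("tower_F", 22)},
}

def candidates(known_points, traces):
    """Return every pseudonym whose trace contains all of the known points."""
    return [uid for uid, pts in traces.items() if known_points <= pts]

def uniqueness_rate(traces, k):
    """Fraction of users pinned down uniquely by some k of their own points."""
    unique = 0
    for uid, pts in traces.items():
        if any(len(candidates(set(combo), traces)) == 1
               for combo in combinations(pts, k)):
            unique += 1
    return unique / len(traces)

# Knowing only that the target was near tower_D at noon identifies "u2".
print(candidates({("tower_D", 12)}, traces))
# In this toy dataset, two known points suffice to single out every user.
print(uniqueness_rate(traces, 2))
```

In this toy dataset, just two (tower, hour) points about a person are enough to single out their entire trace; research on real call-detail records has reported comparable results at national scale, with a handful of spatio-temporal points uniquely identifying the large majority of individuals.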

We subsequently leverage these governance challenges to analyse the CPPA and to suggest amendments to it. Our report’s 19 amendments would affect:

  1. Governance of de-identified data
  2. Knowledge and consent requirements surrounding the socially beneficial purposes exemption and the legitimate interest exemption
  3. Meaningful consent for secondary uses
  4. Indigenous sovereignty
  5. Enforcement mechanisms
  6. Accessibility and corporate transparency

While we frankly believe that the legislation should be withdrawn and re-drafted with human rights as its guiding principle, we also recognise that this is unlikely to happen. As such, our amendments are meant to round off some of the legislation’s sharp edges, though we also recognise that further amendments to other parts of it are likely required.

Ultimately, if the Government of Canada is truly serious about ensuring that individuals and communities are involved in developing the policies that pertain to them, about ameliorating disadvantages faced by marginalized residents of Canada, and about committing to reconciliation with Indigenous populations, then it will commit to serious amendments of Bill C-27 and the CPPA. Our recommendations are made in the spirit of addressing the gaps in this new legislation that are laid bare when assessing how it intersects with Health Canada’s historical use of locational information. They are, however, only a start toward the necessary amendments for this legislation.

Executive Summary

The Government of Canada obtained de-identified and aggregated mobility data from private companies for the socially beneficial purpose of trying to understand and combat the spread of COVID-19. This collection began as early as March 2020, and the information was provided by Telus and BlueDot. It wasn’t until December 2021, after the government issued a request for proposals for cellular tower information that would extend the collection of mobility information, that the public became widely aware of the practice. Parliamentary meetings into the government’s collection of mobility data began shortly thereafter, and a key finding was that Canada’s existing privacy legislation is largely ineffective in managing the collection, use, and disclosure of data in a manner that recognizes the privacy rights of individuals. In spite of this finding, in June 2022 the federal government introduced Bill C-27: An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, which, if passed into law, would fail to correct existing deficiencies in Canada’s federal commercial privacy law. In particular, Bill C-27 would make explicit that the government can continue collecting information, including mobility data from private organizations, so long as the uses are socially beneficial, without clearly demarcating what will or will not constitute such uses in the future.

This report, “Minding Your Business: A Critical Analysis of the Collection of De-identified Mobility Data and Its Use Under the Socially Beneficial and Legitimate Interest Exemptions in Canadian Privacy Law,” critically assesses the government’s existing practice of collecting mobility information for socially beneficial purposes as well as private organizations’ ability to collect and use personal information without first obtaining consent from individuals or providing them with knowledge of the commercial activities. It uses examples raised during the COVID-19 pandemic to propose 19 legislative amendments to Bill C-27. These amendments would enhance corporate and government accountability for the collection, use, and disclosure of information about Canadian residents and communities, including for so-called de-identified information.

Part 1 provides a background of key privacy issues that were linked to collecting mobility data during the COVID-19 pandemic. We pay specific attention to the implementation of new technologies to collect, use, and disclose data, such as those used for contact-tracing applications and those that foreign governments used to collect mobility information from telecommunications carriers. We also attend to the concerns that are linked to collecting location information and why there is a consequent need to develop robust governance frameworks.

Part 2 focuses on the collection of mobility data in Canada. It outlines what is presently known about how Telus and BlueDot collected the mobility information that was subsequently disclosed to the government in aggregated and de-identified formats, and it discusses the key concerns raised in meetings held by the Standing Committee on Access to Information, Privacy and Ethics. The Committee’s meetings and final report make clear that there was an absence of appropriate public communication from the federal government about its collection of mobility information as well as a failure to meaningfully consult with the Office of the Privacy Commissioner of Canada. The Government of Canada also failed to verify that Telus and BlueDot had obtained meaningful consent prior to receiving data that was used to generate insights into Canadian residents’ activities during the pandemic.

Part 3 explores the lawfulness of the collection of mobility data by BlueDot and Telus and the disclosure of the data to the Public Health Agency of Canada under existing federal privacy law. Overall, we find that BlueDot and Telus likely complied with current privacy legislation. The assessment of the lawfulness of BlueDot’s and Telus’s activities serves to reveal deficiencies in Canada’s two pieces of federal privacy legislation: the Privacy Act and the Personal Information Protection and Electronic Documents Act (PIPEDA).

In Part 4, we identify six thematic deficiencies in Canada’s commercial privacy legislation:

  1. PIPEDA fails to adequately protect the privacy interests at stake with de-identified and aggregated data despite risks that are associated with re-identification.
  2. PIPEDA lacks requirements that individuals be informed of how their data is de-identified or used for secondary purposes.
  3. PIPEDA does not enable individuals or communities to substantively prevent harmful impacts of data sharing with the government.
  4. PIPEDA lacks sufficient checks and balances to ensure that meaningful consent is obtained to collect, use, or disclose de-identified data.
  5. PIPEDA does not account for Indigenous data sovereignty nor does it account for Indigenous sovereignty principles in the United Nations Declaration on the Rights of Indigenous Peoples, which has been adopted by Canada.
  6. PIPEDA generally lacks sufficient enforcement mechanisms.

The Government of Canada has introduced the Consumer Privacy Protection Act (CPPA) in Bill C-27 to replace PIPEDA. Part 5 demonstrates that Bill C-27 does not adequately ameliorate the deficiencies of PIPEDA as discussed in Part 4. Throughout, Part 5 offers corrective recommendations to the Consumer Privacy Protection Act that would alleviate many of the thematic issues facing PIPEDA and, by extension, the CPPA.

The federal government and private organizations envision the Consumer Privacy Protection Act as permitting private individuals’ and communities’ data to be exploited for the benefit of the economy and society alike. The legislation includes exemptions to consent and sometimes waives the protections that would normally be associated with de-identified data, where such exemptions could advance socially beneficial purposes or legitimate business interests. While neither the government nor private businesses necessarily intend to use de-identified information to injure, endanger, or negatively affect the persons and communities from whom the data is obtained, the breadth of potential socially beneficial purposes means that future governments will have a wide ambit to define the conceptual and practical meaning of these purposes. Some governments, as an example, might analyze de-identified data to assess how far people must travel to obtain abortion-care services and, subsequently, recognize that more services are required. Other governments could use the same de-identified mobility data to come to the opposite conclusion and selectively adopt policies that impair access to such services. This is but one of many examples. Similar, though not identical, dangers may arise should private organizations be able to collect or use an individual’s personal information without their consent under the legitimate interest exemption in the CPPA. Specifically, this exemption would let private organizations determine whether the benefits of collecting or using personal information outweigh the adverse effects of doing so, with the individuals and communities affected left unaware of how their personal information was collected or used, and thus unable to oppose collections or uses with which they disagree.

Parliamentary committees, the Office of the Privacy Commissioner of Canada, Canadian academics, and civil society organizations have all called for the federal government to amend federal privacy legislation. As presently drafted, however, the Consumer Privacy Protection Act would reaffirm the deficiencies that exist in Canadian law while opening the door to expanded data collection, use, and disclosure by private organizations to the federal government without sufficient accountability or transparency safeguards and while, simultaneously, empowering private organizations to collect and use personal information without prior consent or knowledge. Such safeguards must be added through legislative amendments, or Canada’s new privacy legislation will continue the trend of inadequately protecting individuals and communities from the adverse effects of using de-identified data to advance so-called socially beneficial purposes or of using personal information for ostensibly legitimate business purposes.

Cybersecurity Will Not Thrive in Darkness: A Critical Analysis of Proposed Amendments in Bill C-26 to the Telecommunications Act

Last month I published a report, “Cybersecurity Will Not Thrive in Darkness: A Critical Analysis of Proposed Amendments in Bill C-26 to the Telecommunications Act.” The report undertakes a critical analysis of Bill C-26, which would empower the government to compel critical infrastructure companies to undertake (or refrain from undertaking) activities that, in the government’s opinion, would enhance the security of Canada’s critical infrastructure. The report begins by offering background on why the government sees this legislation as necessary and then proceeds to assess the elements of the legislation that would modify the Telecommunications Act. Specifically, it focuses on issues associated with:

  • Compelling or directing modifications to organizations’ technical or business activities
  • Secrecy and absence of transparency or accountability provisions
  • Deficient judicial review processes
  • Extensive information sharing within and beyond Canadian agencies
  • Costs associated with security compliance
  • Vague drafting language

Thirty recommendations are offered that, if adopted, would leave the government able to compel telecommunications companies to modify their practices while, simultaneously, imbuing the legislation with additional nuance, restraint, and accountability provisions. As drafted today, the legislation prioritises secrecy at the expense of democratic accountability and would establish law that empowers actions that are unpredictable to private organizations and residents of Canada alike. The effect would be to empower the government to undertake lawful, if democratically illegible, activities. Cybersecurity requires a high degree of transparency and dialogue to be successfully implemented. Security can, and must, be aligned with Canada’s democratic principles. It is now up to the government to amend its legislation accordingly.

Executive Summary

On June 14, 2022, the Government of Canada introduced “Bill C-26: An Act respecting cyber security, amending the Telecommunications Act and making consequential amendments to other Acts.” If passed into law, it will significantly reform the Telecommunications Act as well as impose new requirements on federally regulated critical infrastructure providers. This report, “Cybersecurity Will Not Thrive in Darkness: A Critical Analysis of Proposed Amendments in Bill C-26 to the Telecommunications Act,” offers 30 recommendations to the draft legislation in an effort to correct its secrecy and accountability deficiencies, while suggesting amendments that would impose some restrictions on the range of powers that the government would be able to wield. These amendments must be seriously taken up because of the sweeping nature of the legislation.

As drafted at the time of writing, Bill C-26 would empower the Minister of Industry to compel telecommunications providers to do, or refrain from doing, anything in the service of securing Canadian telecommunications networks against the threats of interference, manipulation, or disruption. The legislation would authorize the Minister to compel providers to disclose confidential information and would then enable the Minister to circulate it widely within the federal government; this information could include either identifiable or de-identified personal information. Moreover, the Minister could share non-confidential information internationally, even when doing so could result in regulatory processes or private rights of action against an individual or organization. Should the Minister, or another party with whom the Minister shares information, unintentionally lose control of that information, no liability would attach to the government for the accident.

Where orders or regulations are issued, they would not need to be published in the Canada Gazette, and gag provisions could be attached to the recipients of such orders. There may even be situations where the government could issue an order or regulation, with the aforementioned publication ban and gag, that runs counter to a decision by the Canadian Radio-television and Telecommunications Commission (CRTC) and overrides aspects of that decision. And in any case where a telecommunications provider seeks judicial review, it might never see the evidence used to justify an order or regulation. However, if a telecommunications provider is found to have deliberately ignored or failed to adhere to an order, then either the individuals who directed the action or the telecommunications provider itself could suffer administrative monetary penalties.

This report, in summary, identifies and analyzes a series of deficiencies in Bill C-26 as it is presently drafted:

  • The breadth of what the government might order a telecommunications provider to do is not sufficiently bounded.
  • The excessive secrecy and confidentiality provisions imposed on telecommunications providers threaten to establish a class of secret law and regulations.
  • Significant potential exists for excessive information sharing within the federal government as well as with international partners.
  • Costs associated with compliance with reforms may endanger the viability of smaller providers.
  • Vague drafting language means that the full contours of the legislation cannot be assessed.
  • No recognition of privacy or other Charter-protected rights exists as a counterbalance to proposed security requirements nor are appropriate accountability or transparency requirements imposed on the government.

Even if it is presumed that the government does need the ability to encourage or compel telecommunications providers to modify their technical or business operations to enhance the security of their services and facilities, it is readily apparent that more transparency and accountability should be required of the government. All of the recommendations in this report are meant to address these existing problems in the legislation.

Should these recommendations, or ones derived from them, not be taken up, the government will be creating legislation of the worst kind insofar as it will require the public—and telecommunications providers—to simply trust that the government knows what it is doing, that it is reaching the right decisions, and that no need exists for a broader public discussion concerning the kinds of protections that should be put in place to protect the cybersecurity of Canada’s telecommunications networks. Cybersecurity cannot thrive on secretive and shadowy government edicts. The government must amend its legislation to ensure that its activities comport with Canada’s democratic values and the norms of transparency and accountability.