Why Is(n’t) TikTok A National Security Risk?

Photo by Ron Lach on Pexels.com

There have been grumblings about TikTok being a national security risk for many years, and they’re getting louder with each passing month. Indeed, in the United States a bill has been presented to ban TikTok (“The ANTI-SOCIAL CCP ACT”) and a separate bill (“No TikTok on Government Devices Act”) has passed the Senate and would bar the application from being used on government devices. In Canada, the Prime Minister noted that the country’s signals intelligence agency, the Communications Security Establishment, is “watching very carefully.”

I recently provided commentary where I outlined some of the potential risks associated with TikTok and where it likely should fit into Canada’s national security priorities (spoiler: probably pretty low). Here I just want to expand on my comments a bit to provide some deeper context and reflections.

As with all things security-related you need to think through what assets you are attempting to protect, the sensitivity of what you’re trying to protect, and what measures are more or less likely to protect those assets. Further, in developing a protection strategy you need to think through how many resources you’re willing to invest to achieve the sought-after protection. This applies as much to national security policy makers as it does to individuals trying to secure devices or networks.

What Is Being Protected

Most public figures who talk about TikTok and national security are presently focused on two assets.

First, they worry that a large volume of data may be collected and used by Chinese government agencies, after these agencies either receive it voluntarily from TikTok or compel its disclosure. Commentators argue that Chinese companies are bound to obey the national security laws of China and, as such, may be forced to disclose data without any notice to users or to non-Chinese government agencies. This information could be used to obtain information about specific individuals or communities, inclusive of what people are searching for on the platform (e.g., medical information, financial information, sexual preference information), what they themselves are posting that could prove embarrassing, or metadata which could be used for subsequent targeting.

Second, some commentators are adopting the somewhat odious language of ‘cognitive warfare’ in talking about TikTok.1 The argument is that the Chinese government might compel the company to modify its algorithms so as to influence what people see on the platform. The intent of this modification would be to influence political preferences or social and cultural perceptions. Some worry this kind of influence could guide whom individuals are more likely to vote for (e.g., you see a number of videos that directly or indirectly encourage you to support particular political parties), cause generalised apathy (e.g., you see videos suggesting that all parties are bad and none worth voting for), or enhance societal tensions (e.g., videos that work to inflame partisanship and impair the functioning of otherwise moderate democracies). Or, as likely, a combination of each of these kinds of influence operations. Moreover, the TikTok algorithm could be modified by government compulsion to prioritise videos that praise some countries or suppress videos that negatively portray others.

What Is the Sensitivity of the Assets?

The sensitivity of the information and data collected by TikTok is potentially high but, in practice, varies based on the person(s) in question. Research conducted by the University of Toronto’s Citizen Lab found that while TikTok does collect a significant volume of information, that volume largely parallels what Facebook and other Western companies collect. To put this slightly differently, a lot of information is collected, and its sensitivity depends on whom it belongs to, who may have access to it, and what those parties do with it.

When we consider who is using TikTok and having their information uploaded to the company’s servers, then, the question becomes whether there is a particular national security risk linked with this activity. While some individuals may be targets based on their political, business, or civil society bona fides, this will not be the case with all (or most) users. However, even in assessing the national security risks linked to individuals (or associated groups) it’s helpful to do a little more thinking.

First, the amount of information that is collected by TikTok, when merged with other data which could theoretically be collected using other signals intelligence methods (e.g., extracting metadata and select content from middle-boxes, Internet platforms, open-source locations, etc.), could be very revealing. Five Eyes countries (i.e., Australia, Canada, New Zealand, the United Kingdom, and the United States of America) collect large volumes of metadata on vast swathes of the world’s populations in order to develop patterns of life which, when added together, can be deeply revelatory. When and how those countries’ intelligence agencies actually use the collected information varies and is kept very secret. Generally, however, only a small subset of individuals whose information is collected and retained for any period of time have actions taken towards them. Nonetheless, we know that there is a genuine concern about information from private companies being obtained by intelligence services in the Five Eyes, and it’s reasonable to be concerned that similar activities might be undertaken by Chinese intelligence services.

Second, the kinds of content information which are retained by TikTok could be embarrassing at a future time, or used by state agencies in ways that users would not expect or prefer. Imagine a situation where a young person says or does something on TikTok which is deeply offensive. Fast forward 3-4 years: their parents are diplomats or significant members of the business community, and that offensive content is used by Chinese security services to embarrass or otherwise inconvenience the parents. Such influence operations might impede Canada’s ability to conduct its diplomacy abroad or undermine a business’s ability to prosper.

Third, the TikTok algorithm is not well understood. There is a risk that the Chinese government might compel ByteDance, and through them the TikTok platform, to modify algorithms to amplify some content and not others. It is hard to assess how ‘sensitive’ a population’s general sense of the world is but, broadly, if a surreptitious foreign influence operation occurred it might potentially affect how a population behaves or sees the world. To be clear this kind of shift in behaviour would not follow from a single video but from a concerted effort over time that shifted social perceptions amongst at least some distinct social communities. The sensitivity of the information used to identify videos to play, then, could be quite high across a substantial swathe of the population using the platform.

It’s important to recognise that in the aforementioned examples there is no evidence that ByteDance, which owns TikTok, has been compelled by the Chinese government to perform these activities. But these are the kinds of sensitivities that are linked to using TikTok and are popularly discussed.

What Should Be Done To Protect Assets?

The threats posed by TikTok are, at the moment, speculative: the platform could be used for any number of things. People’s concerns are linked less to the algorithm or the data that is collected and more to ByteDance being a Chinese company that might be influenced by the Chinese government to share data or undertake activities which are deleterious to Western countries’ interests.

Bluntly: the issue raised by TikTok is not necessarily linked to the platform itself but to the geopolitical struggles between China and other advanced economies throughout the world. We don’t have a TikTok problem per se but, instead, have a Chinese national security and foreign policy problem. TikTok is just a very narrow lens through which concerns and fears are being channelled.

So in the absence of obvious and deliberate harmful activities being undertaken by ByteDance and TikTok at the behest of the Chinese government, what should be done? At the outset it’s worth recognising that many of the concerns expressed by politicians–and especially those linked to surreptitious influence operations–would already run afoul of Canadian law. The CSIS Act bars clandestine foreign intelligence operations which are regarded as threatening the security of Canada. Specifically, “threats to the security of Canada” means:

(a) espionage or sabotage that is against Canada or is detrimental to the interests of Canada or activities directed toward or in support of such espionage or sabotage,

(b) foreign influenced activities within or relating to Canada that are detrimental to the interests of Canada and are clandestine or deceptive or involve a threat to any person,

(c) activities within or relating to Canada directed toward or in support of the threat or use of acts of serious violence against persons or property for the purpose of achieving a political, religious or ideological objective within Canada or a foreign state, and

(d) activities directed toward undermining by covert unlawful acts, or directed toward or intended ultimately to lead to the destruction or overthrow by violence of, the constitutionally established system of government in Canada,

CSIS is authorised to undertake measures which would reduce the threats to the security of Canada, perhaps in partnership with the Communications Security Establishment, should such a threat be identified and a warrant obtained from the federal court.

On the whole a general ban on TikTok is almost certainly disproportionate and unreasonable at this point in time. There is no evidence of harm. There is no evidence of influence by the Chinese government. Rather than banning the platform generally I think that more focused legislation or policy could make sense.

First, I think that legislation or (preferably) policies precluding at least some members of government and senior civil servants from using TikTok has some merit. In these cases a risk analysis should be conducted to determine if collected information would undermine the Government of Canada’s ability to secure confidential information or if the collected information could be used for intelligence operations against the government officials. Advice might, also, be issued by the Canadian Security Intelligence Service so that private organisations are aware of their risks. In exceptional situations some kind of security requirements might also be imposed on private organisations and individuals, such as those who are involved in especially sensitive roles managing critical infrastructure systems. Ultimately, I suspect the number of people who should fall under this ban would, and should, be pretty small.

Second, what makes sense is legislation that requires social media companies writ large–not just TikTok–to make their algorithms and data flows legible to regulators. Moreover, individual users should be able to learn, and understand, why certain content is being prioritised or shown to them. Should platforms decline to comply with such a law then sanctions may be merited. Similarly, should algorithmic legibility showcase that platforms are being manipulated or developed in ways that deliberately undermine social cohesion then some sanctions might be merited, though with a caveat: undermining “social cohesion” should be understood as referring to platforms being deliberately designed to incite rage or other strong emotions with the effect of continually, and artificially, weakening social cohesion and amplifying social cleavages. The term should not, however, be seen as a kind of code for creating exclusionary social environments where underprivileged groups continue to be treated in discriminatory ways.

So Is TikTok ‘Dangerous’ From A National Security Perspective?

Based on open source information2 there is no reason to think that TikTok is currently a national security threat. Are there any risks associated with the platform? Sure, but they need to be juxtaposed against equivalent or more serious threats and priorities. We only have so many resources to direct towards the growing legion of legitimate national security risks and issues; funnelling a limited set of resources towards TikTok may not be the best kind of prioritisation.

Consider that while the Chinese government could compel TikTok to disclose information about its users to intelligence and security services…the same government could also use business cutouts and purchase much of the same information from data brokers operating in the United States and other jurisdictions. There would be no need to secretly force a company to do something when, instead, it could just lawfully acquire equivalent (or more extensive!) information. This is a pressing and real national security (and privacy!) issue and is deserving of legislative scrutiny and attention.

Further, while there is a risk that TikTok could be used to manipulate social values…the same is true of other social networking services. Indeed, academic and journalistic research over the past 5-7 years has drawn attention to how popular social media services are designed to deliver dopamine hits and keep us on them. We know that various private companies and public organisations around the world work tirelessly to ‘hack’ those algorithms and manipulate social values. Of course this broader manipulation doesn’t mean that we shouldn’t care but, also, makes clear that TikTok isn’t the sole vector of these efforts. Moreover, there are real questions about how well social influence campaigns work: do they influence behaviour–are they supplying change?–or is the efficacy of any campaign representative of an attentive and interested pre-existing audience–is demand for the content the problem?

The nice thing about banning, blocking, or censoring material, or undertaking some other kind of binary decision, is that you feel like you’ve done something. Bans, blocks, and censorship are typically designed for a black and white world. We, however, live in a world that is shrouded in greys. We only have so much legislative time, so much policy capacity, so much enforcement ability: it should all be directed efficiently to understanding, appreciating, and addressing the fullness of the challenges facing states and society. This time and effort should not be spent on performative politics that is great for providing a dopamine hit but which fails to address the real underlying issues.

  1. I have previously talked about the broader risks of correlating national security and information security.
  2. Open source information means information which you or I can find, and read, without requiring a security clearance.

Building Trust in Chinese Infrastructure Vendors and Communications Intermediaries

Last week I appeared before the Special Committee on Canada-China Relations to testify about the security challenges posed by Chinese infrastructure vendors and communications intermediaries. I provided oral comments to the committee which were, substantially, a truncated version of the brief I submitted. If so interested, my oral comments are available to download, and what follows in this post is the actual brief which was submitted.


  1. I am a senior research associate at the Citizen Lab, Munk School of Global Affairs & Public Policy at the University of Toronto. My research explores the intersection of law, policy, and technology, and focuses on issues of national security, data security, and data privacy. I submit these comments in a professional capacity representing my views and those of the Citizen Lab.


  2. Successive international efforts to globalize trade and supply chains have led to many products being designed, developed, manufactured, or shipped through China. This has, in part, meant that Chinese companies are regularly involved in the creation and distribution of products that are used in the daily lives of billions of people around the world, including products that are integrated into Canadians’ personal lives and the critical infrastructures on which they depend. The Chinese government’s increasing assertiveness on the international stage and its belligerent behaviours, in tandem with opaque national security laws, have led to questioning in many Western countries of the extent to which products which come from China can be trusted. In particular, two questions are regularly raised: might supply chains be used as diplomatic or trade leverage or, alternately, will products produced in, transited through, or operated from China be used to facilitate government intelligence, attack, or influence operations?
  3. For decades there have been constant concerns about managing technology products’ supply chains.[1] In recent years, they have focused on telecommunications equipment, such as that produced by ZTE and Huawei,[2] as well as the ways that social media platforms such as WeChat or TikTok could be surreptitiously used to advance the Chinese government’s interests. As a result of these concerns some of Canada’s allies have formally or informally blocked Chinese telecommunications vendors’ equipment from critical infrastructure. In the United States, military personnel are restricted in which mobile devices they can buy on base and they are advised to not use applications like TikTok, and the Trump administration aggressively sought to modify the terms under which Chinese social media platforms were available in the United States marketplace.
  4. Legislators and some security professionals have worried that ZTE or Huawei products might be deliberately modified to facilitate Chinese intelligence or attack operations, or be drawn into bilateral negotiations or conflicts that could arise with the Chinese government. Further, social media platforms might be used to facilitate surveillance of international users of the applications, or the platforms’ algorithms could be configured to censor content or to conduct imperceptible influence operations.
  5. Just as there are generalized concerns about supply chains there are also profound worries about the state of computer (in)security. Serious computer vulnerabilities are exposed and exploited on a daily basis. State operators take advantage of vulnerabilities in hardware and software alike to facilitate computer network discovery, exploitation, and attack operations, with operations often divided between formal national security organs, branches of national militaries, and informal state-adjacent (and often criminal) operators. Criminal organizations, similarly, discover and take advantage of vulnerabilities in digital systems to conduct identity theft, steal intellectual property for clients or to sell on black markets, use and monetize vulnerabilities in ransomware campaigns, and otherwise engage in socially deleterious activities.
  6. In aggregate, issues of supply chain management and computer insecurity raise baseline questions of trust: how can we trust that equipment or platforms have not been deliberately modified or exploited to the detriment of Canadian interests? And given the state of computer insecurity, how can we rely on technologies with distributed and international development and production teams? In the rest of this submission, I expand on specific trust-related concerns and identify ways to engender trust or, at the very least, make it easier to identify when we should in fact be less trusting of equipment or services which are available to Canadians and Canadian organizations.

Enforcing Canadian Privacy Laws Against American Social Networking Companies

Photo by Jimmy Emerson

As mentioned previously, for the past several months I’ve been conducting research with academics at the University of Victoria to understand the relationship(s) between social networking companies’ data access, retention, and disclosure policies. One aspect of our work addresses the concept of jurisdiction: what systems of rules mediate or direct how social media companies collect, retain, use, and disclose subscribers’ personal information? To address this question we have taken up how major social networking companies comply, or not, with some of the most basic facets of Canadian privacy law: the right to request one’s own data from these companies. Our research has been supported by funding provided through the Office of the Privacy Commissioner of Canada’s contributions program. All our research has been conducted independently of the Office and none of our findings necessarily reflect the Commissioner’s positions. As part of our methodology, while we may report on our access requests being stymied, we are not filing complaints with the federal Commissioner’s office.

Colin Bennett first presented a version of this paper, titled “Real and Substantial Connections: Enforcing Canadian Privacy Laws Against American Social Networking Companies” at an Asian Privacy Scholars event and, based on comments and feedback, we have revised that work for a forthcoming conference presentation in Malta. Below is the abstract of the paper, as well as a link to the Social Science Research Network site that is hosting the paper.


Any organization that captures personal data in Canada for processing is deemed to have a “real and substantial connection” to Canada and fall within the jurisdiction of the Personal Information Protection and Electronic Documents Act (PIPEDA) and of the Office of the Privacy Commissioner of Canada. What has been the experience of enforcing Canadian privacy protection law on US-based social networking services? We analyze some of the high-profile enforcement actions by the Privacy Commissioner. We also test compliance through an analysis of the privacy policies of the top 23 SNSs operating in Canada with the use of access to personal information requests. Most of these companies have failed to implement some of the most elementary requirements of data protection law. We conclude that an institutionalization of non-compliance is widespread, explained by the countervailing conceptions of jurisdiction inherent in corporate policy and technical system design.

Download the paper at SSRN

Graph Search and ‘Risky’ Communicative Domains

Photo by Lynn Friedman

There have been lots of good critiques and comments concerning Facebook’s recently announced “Graph Search” product. Graph Search lets individuals semantically query large datasets that are associated with data shared by their friends, friends-of-friends, and the public more generally. Greg Satell tries to put the product in context – Graph Search is really a way for corporations to peer into our lives – and a series of articles have tried to unpack the privacy implications of Facebook’s newest product.

I want to talk less directly about privacy, and more about how Graph Search threatens to further limit discourse on the network. While privacy is clearly implicated throughout the post, we can think of privacy not just as a loss for the individual but also in terms of the broader social impacts of its loss. Specifically, I want to briefly reflect on how Graph Search (further?) transforms Facebook into a hostile discursive domain, and what this might mean for Facebook users.


Forgetting, Non-Forgetting and Quasi-Forgetting in Social Networking

Image by fake is the new real

For the past several months I’ve been conducting research with academics at the University of Victoria to understand the relationship(s) between social networking companies’ data access, retention, and disclosure policies. One element of this research has involved testing whether these networks comply with the Personal Information Protection and Electronic Documents Act: do social networks provide subscribers access to their personal data when a subscriber asks? Another element has involved evaluating the privacy policies of major social networks: how do these companies understand access, retention, and disclosure of subscriber data? We’ve also been investigating how law enforcement agencies access, and use, data from social networking companies. This research has been supported by funding provided through the Office of the Privacy Commissioner of Canada’s contributions program. All our research has been conducted independently of the Office and none of our findings necessarily reflect the Commissioner’s positions.

Colin Bennett presented a draft of one of the academic papers emergent from this research, titled “Forgetting, Non-Forgetting and Quasi-Forgetting in Social Networking: Canadian Policy and Corporate Practices.” It was given at the 2013 Computers, Privacy and Data Protection Conference. Below is the abstract of the paper, as well as a link to the Social Science Research Network site that is hosting the paper.


In this paper we analyze some of the practical realities around deleting personal data on social networks with respect to the Canadian regime of privacy protection. We first discuss the extent to which the European right to be forgotten is, and is not, reflected in Canadian privacy law, in regulation, and in the decisions of the Office of the Privacy Commissioner of Canada. After outlining the limitations of Canadian law we turn to corporate organizational practices. Our analyses of social networking sites’ privacy policies reveal how poorly companies recognize the right to be forgotten in their existing privacy commitments and practices. Next, we turn to Law Enforcement Authorities (LEAs) and how their practices challenge the right because of LEAs’ own capture, processing, and retention of social networking information. We conclude by identifying lessons from the Canadian experience and raising them against the intense transatlantic struggle over the scope of the new Draft Regulation.

Download paper at SSRN (Download from alternate source)

How To Get Your Personal Information From Social Networks

Photo by Evan Long

Canadian news routinely highlights the ‘dangers’ that can be associated with social networking companies collecting and storing information about Canadian citizens. Stories and articles regularly discuss how hackers can misuse your personal information, how companies store ‘everything’ about you, and how collected data is disclosed to unscrupulous third parties. While many of these stories are accurate, insofar as they cover specific instances of harm and risky behaviour, they tend to lack an important next step; they rarely explain how Canadians can get educated on data collection, retention, and disclosure processes.

Let’s be honest: any next step has to be reasonable. Expecting Canadians to flee social media en masse and return to letter writing isn’t an acceptable (or, really, an appropriate) response. Similarly, saying “tighten your privacy controls” or “be careful what you post” are of modest value, at best; many Canadians are realizing that tightening their privacy controls does little when the companies can (and do) change their privacy settings without any notice. This post is inspired by a different next step. Rather than being inspired by fear emergent from ‘the sky is falling’ news stories, what if you were inspired by knowledge that you, yourself, gained? In what follows I walk you through how to compel social networking companies to disclose what information they have about you. In the process of filing these requests you’ll learn a lot more about being a member of these social networking services and, based on what you learn, can decide whether you want to change your involvement with particular social media companies.

I start by explaining why Canadians have a legal right to compel companies to disclose and make available the information that they retain about Canadian citizens. I then provide a template letter that you can send to social networking organizations with which you have a preexisting relationship. This template is, in effect, a tool that you can use to compel companies to disclose your personal information. After providing the template I explain the significance of some of the items contained in it. Next, I outline some of the difficulties or challenges you might have in requesting your personal information and a few ways to counteract those problems. Finally, I explain how you can complain if a company does not meet its legal obligation to provide you with a copy of your personal information. By the end of this post, you’ll have everything you need to request your personal information from the social networking services to which you subscribe.