Why Is(n’t) TikTok A National Security Risk?

Photo by Ron Lach on Pexels.com

There have been grumblings about TikTok being a national security risk for many years, and they’re getting louder with each passing month. Indeed, in the United States a bill has been presented to ban TikTok (“The ANTI-SOCIAL CCP ACT”) and a separate bill (“No TikTok on Government Devices Act”) has passed the Senate and would bar the application from being used on government devices. In Canada, the Prime Minister noted that the country’s signals intelligence agency, the Communications Security Establishment, is “watching very carefully.”

I recently provided commentary where I outlined some of the potential risks associated with TikTok and where it likely should fit into Canada’s national security priorities (spoiler: probably pretty low). Here I just want to expand on my comments a bit to provide some deeper context and reflections.

As with all things security-related you need to think through what assets you are attempting to protect, the sensitivity of what you’re trying to protect, and what measures are more or less likely to protect those assets. Further, in developing a protection strategy you need to think through how many resources you’re willing to invest to achieve the sought-after protection. This applies as much to national security policy makers as it does to individuals trying to secure devices or networks.

What Is Being Protected

Most public figures who talk about TikTok and national security are presently focused on one or two assets.

First, they worry that a large volume of data may be collected and used by Chinese government agencies, after these agencies receive it either voluntarily from TikTok or after compelling its disclosure. Commentators argue that Chinese companies are bound to obey the national security laws of China and, as such, may be forced to disclose data without any notice to users or non-Chinese government agencies. This data could be used to obtain information about specific individuals or communities, inclusive of what people are searching on the platform (e.g., medical information, financial information, sexual preference information), what they themselves are posting that could later prove embarrassing, or metadata which could be used for subsequent targeting.

Second, some commentators are adopting a somewhat odious language of ‘cognitive warfare’ in talking about TikTok.1 The argument is that the Chinese government might compel the company to modify its algorithms so as to influence what people are seeing on the platform. The intent of this modification would be to influence political preferences or social and cultural perceptions. Some worry this kind of influence could guide whom individuals are more likely to vote for (e.g., you see a number of videos that directly or indirectly encourage you to support particular political parties), cause generalised apathy (e.g., you see videos that suggest that all parties are bad and none worth voting for), or enhance societal tensions (e.g., work to inflame partisanship and impair the functioning of otherwise moderate democracies). Or, as likely, a combination of each of these kinds of influence operations. Moreover, the TikTok algorithm could be modified by government compulsion to prioritise videos that praise some countries or that suppress videos which negatively portray other countries.

What Is the Sensitivity of the Assets?

The sensitivity of the information and data collected by TikTok can potentially be high but, in practice, varies based on the person(s) in question. Research conducted by the University of Toronto’s Citizen Lab found that while TikTok does collect a significant volume of information, that volume largely parallels what Facebook or other Western companies collect. To put this slightly differently, a lot of information is collected and the sensitivity is associated with whom it belongs to, who may have access to it, and what those parties do with it.

When we consider who is using TikTok and having their information uploaded to the company’s servers, then, the question becomes whether there is a particular national security risk linked with this activity. While some individuals may potentially be targets based on their political, business, or civil society bona fides, this will not be the case with all (or most) users. However, even in assessing the national security risks linked to individuals (or associated groups) it’s helpful to do a little more thinking.

First, the amount of information that is collected by TikTok, when merged with other data which could theoretically be collected using other signals intelligence methods (e.g., extracting metadata and select content from middle-boxes, Internet platforms, open-source locations, etc), could be very revealing. Five Eyes countries (i.e., Australia, Canada, New Zealand, the United Kingdom, and the United States of America) collect large volumes of metadata on vast swathes of the world’s populations in order to develop patterns of life which, when added together, can be deeply revelatory. When and how those countries’ intelligence agencies actually use the collected information varies and is kept very secret. Generally, however, only a small subset of individuals whose information is collected and retained for any period of time have actions taken towards them. Nonetheless, we know that there is a genuine concern about information from private companies being obtained by intelligence services in the Five Eyes, and it’s reasonable to be concerned that similar activities might be undertaken by Chinese intelligence services.

Second, the kinds of content information which are retained by TikTok could be embarrassing at a future time, or used by state agencies in ways that users would not expect or prefer. Imagine a situation where a young person says or does something on TikTok which is deeply offensive. Fast forward 3-4 years and their parents are diplomats or significant members of the business community, and that offensive content is used by Chinese security services to embarrass or otherwise inconvenience the parents. Such influence operations might impede Canada’s ability to conduct its diplomacy abroad or undermine a business’s ability to prosper.

Third, the TikTok algorithm is not well understood. There is a risk that the Chinese government might compel ByteDance, and through them the TikTok platform, to modify algorithms to amplify some content and not others. It is hard to assess how ‘sensitive’ a population’s general sense of the world is but, broadly, if a surreptitious foreign influence operation occurred it might potentially affect how a population behaves or sees the world. To be clear this kind of shift in behaviour would not follow from a single video but from a concerted effort over time that shifted social perceptions amongst at least some distinct social communities. The sensitivity of the information used to identify videos to play, then, could be quite high across a substantial swathe of the population using the platform.

It’s important to recognise that in the aforementioned examples there is no evidence that ByteDance, which owns TikTok, has been compelled by the Chinese government to perform these activities. But these are the kinds of sensitivities that are linked to using TikTok and are popularly discussed.

What Should Be Done To Protect Assets?

The threats which are posed by TikTok are, at the moment, speculative: it could be used for any number of things. The reasons people are concerned are linked less to the algorithm or data that is collected and more to ByteDance being a Chinese company that might be influenced by the Chinese government to share data or undertake activities which are deleterious to Western countries’ interests.

Bluntly: the issue raised by TikTok is not necessarily linked to the platform itself but to the geopolitical struggles between China and other advanced economies throughout the world. We don’t have a TikTok problem per se but, instead, have a Chinese national security and foreign policy problem. TikTok is just a very narrow lens through which concerns and fears are being channelled.

So in the absence of obvious and deliberate harmful activities being undertaken by ByteDance and TikTok at the behest of the Chinese government what should be done? At the outset it’s worth recognising that many of the concerns expressed by politicians–and especially those linked to surreptitious influence operations–would already run afoul of Canadian law. The CSIS Act bars clandestine foreign intelligence operations which are regarded as threatening the security of Canada. Specifically, threats to the security of Canada means:

(a) espionage or sabotage that is against Canada or is detrimental to the interests of Canada or activities directed toward or in support of such espionage or sabotage,

(b) foreign influenced activities within or relating to Canada that are detrimental to the interests of Canada and are clandestine or deceptive or involve a threat to any person,

(c) activities within or relating to Canada directed toward or in support of the threat or use of acts of serious violence against persons or property for the purpose of achieving a political, religious or ideological objective within Canada or a foreign state, and

(d) activities directed toward undermining by covert unlawful acts, or directed toward or intended ultimately to lead to the destruction or overthrow by violence of, the constitutionally established system of government in Canada,

CSIS is authorised to undertake measures which would reduce the threats to the security of Canada, perhaps in partnership with the Communications Security Establishment, should such a threat be identified and a warrant obtained from the federal court.

On the whole a general ban on TikTok is almost certainly disproportionate and unreasonable at this point in time. There is no evidence of harm. There is no evidence of influence by the Chinese government. Rather than banning the platform generally I think that more focused legislation or policy could make sense.

First, I think that legislation or (preferably) policies precluding at least some members of government and senior civil servants from using TikTok has some merit. In these cases a risk analysis should be conducted to determine if collected information would undermine the Government of Canada’s ability to secure confidential information or if the collected information could be used for intelligence operations against the government officials. Advice might, also, be issued by the Canadian Security Intelligence Service so that private organisations are aware of their risks. In exceptional situations some kind of security requirements might also be imposed on private organisations and individuals, such as those who are involved in especially sensitive roles managing critical infrastructure systems. Ultimately, I suspect the number of people who should fall under this ban would, and should, be pretty small.

Second, what makes sense is legislation that requires social media companies writ large–not just TikTok–to make their algorithms and data flows legible to regulators. Moreover, individual users should be able to learn, and understand, why certain content is being prioritised or shown to them. Should platforms decline to comply with such a law then sanctions may be merited. Similarly, should algorithmic legibility showcase that platforms are being manipulated or developed in ways that deliberately undermine social cohesion then some sanctions might be merited, though with the caveat that “social cohesion” should be understood as referring to platforms being deliberately designed to incite rage or other strong emotions with the effect of continually, and artificially, weakening social cohesion and amplifying social cleavages. The term should not, however, be seen as a kind of code for creating exclusionary social environments where underprivileged groups continue to be treated in discriminatory ways.

So Is TikTok ‘Dangerous’ From A National Security Perspective?

Based on open source information2 there is no reason to think that TikTok is currently a national security threat. Are there any risks associated with the platform? Sure, but they need to be juxtaposed against equivalent or more serious threats and priorities. We only have so many resources to direct towards the growing legion of legitimate national security risks and issues; funnelling a limited set of resources towards TikTok may not be the best kind of prioritisation.

Consider that while the Chinese government could compel TikTok to disclose information about its users to intelligence and security services…the same government could also use business cutouts and purchase much of the same information from data brokers operating in the United States and other jurisdictions. There would be no need to secretly force a company to do something when, instead, it could just lawfully acquire equivalent (or more extensive!) information. This is a pressing and real national security (and privacy!) issue and is deserving of legislative scrutiny and attention.

Further, while there is a risk that TikTok could be used to manipulate social values…the same is true of other social networking services. Indeed, academic and journalistic research over the past 5-7 years has drawn attention to how popular social media services are designed to deliver dopamine hits and keep us on them. We know that various private companies and public organisations around the world work tirelessly to ‘hack’ those algorithms and manipulate social values. Of course this broader manipulation doesn’t mean that we shouldn’t care but, also, makes clear that TikTok isn’t the sole vector of these efforts. Moreover, there are real questions about how well social influence campaigns work: do they influence behaviour–are they supplying change?–or is the efficaciousness of any campaign representative of an attentive and interested pre-existing audience–is demand for the content the problem?

The nice thing about banning, blocking, or censoring material, or undertaking some other kind of binary decision, is that you feel like you’ve done something. Bans, blocks, and censorship are typically designed for a black and white world. We, however, live in a world that is actually shrouded in greys. We only have so much legislative time, so much policy capacity, so much enforcement ability: it should all be directed efficiently to understanding, appreciating, and addressing the fullness of the challenges facing states and society. This time and effort should not be spent on performative politics that is great for providing a dopamine hit but which fails to address the real underlying issues.


  1. I have previously talked about the broader risks of correlating national security and information security.
  2. Open source information means information which you or I can find, and read, without requiring a security clearance.

Unpacking NSIRA’s 2020 Annual Report

black and white typewriter on table
Photo by Markus Winkler on Pexels.com

On December 13, 2021, the National Security Intelligence Review Agency (NSIRA) released its 2020 Annual Report. NSIRA is responsible for conducting national security reviews of Canadian federal agencies, and its annual report summarizes activities that were undertaken in 2020 and also indicates NSIRA’s plans for future work.

I want to highlight three points that emerge from my reading of the report:

  1. NSIRA has generally been able to obtain the information it required to carry out its reviews. The exception to this, however, is that NSIRA has experienced challenges obtaining information from the Communications Security Establishment (CSE). It is not entirely clear why this has been the case.
  2. While most of NSIRA’s reviews have been completed in spite of the pandemic, this is not the case with CSE reviews where several remain outstanding.
  3. NSIRA has spent time in the annual report laying out tripwires that, if activated, will alert Canadians and their elected officials to problems that the review agency may be experiencing in fulfilling its mandate. It is imperative that observers pay close attention to these tripwires in future reviews. However, while these tripwires are likely meant to demonstrate the robustness of NSIRA reviews they run the risk of undermining review conclusions if not carefully managed.

In this post, I proceed in the order of the annual report and highlight key items that stood out. The headings used in this post, save for analysis headings, are correlated with the headings of the same name in the annual report itself.

Continue reading

Reflections on “Foreign Interference: Threats to Canada’s Democratic Process”

crop hacker typing on laptop with data on screen
Photo by Sora Shimazaki on Pexels.com

It is widely expected that Canadians will be going to the polls in the next few months. In advance of the election the Canadian Security Intelligence Service (CSIS) has published an unclassified report entitled, “Foreign Interference: Threats to Canada’s Democratic Process.”1 

In this post I briefly discuss some of the highlights of the report and offer some productive criticism concerning who the report and its guidance are directed at, and the ability of individuals to act on the provided guidance. The report ultimately represents a valuable contribution to efforts to increase the awareness of national security issues in Canada and, on that basis alone, I hope that CSIS and other members of Canada’s intelligence and security community continue to publish these kinds of reports.

Summary

The report generally outlines a series of foreign interference-related threats that face Canada, and Canadians. Foreign interference includes, “attempts to covertly influence, intimidate, manipulate, interfere, corrupt or discredit individuals, organizations and governments to further the interests of a foreign country” and are, “carried out by both state and non-state actors” towards, “Canadian entities both inside and outside of Canada, and directly threaten national security” (Page 5). The report is divided into sections which explain why Canada and Canadians are targets of foreign interference, the goals of foreign states, who might be targeted, the techniques that might be adopted to conduct foreign interference, and how to detect and avoid such interference. The report concludes by discussing some of the election-specific mechanisms that have been adopted by the Government of Canada to mitigate the effects and effectiveness of foreign interference operations.

On the whole this is a pretty good overview document. It makes a good academic teaching resource, insofar as it provides a high-level overview of what foreign interference can entail and would probably serve as a nice kick off to discuss the topic of foreign interference more broadly.2

Continue reading

NSIRA Calls CSE’s Lawfulness Into Question

photo of cryptic character codes and magnifying glass on table top
Photo by cottonbro on Pexels.com

On June 18, 2021, the National Security Intelligence Review Agency (NSIRA) released a review of how the Communications Security Establishment (CSE) disclosed Canadian Identifying Information (CII) to domestic Canadian agencies. I draw three central conclusions from the review.

  1. CSE potentially violated the Privacy Act, which governs how federal government institutions handle personal information.
  2. The CSE’s assistance to the Canadian Security Intelligence Service (CSIS) was concealed from the Federal Court. The Court was responsible for authorizing warrants for CSIS operations that the CSE was assisting with.
  3. CSE officials may have misled Parliament in explaining how the assistance element of its mandate was operationalized in the course of debates meant to extend CSE’s capabilities and mandate.

In this post I describe the elements of the review, a few key parts of CSE’s response to it, and conclude with a series of issues that the review and response raise.

Background

Under the National Defence Act, CSE would incidentally collect CII in the course of conducting foreign signals intelligence, cybersecurity and information assurance, and assistance operations. From all of those operations, it would produce reports that were sent to clients within the Government of Canada. By default, Canadians’ information is expected to be suppressed but agencies can subsequently request CSE to re-identify suppressed information.

NSIRA examined disclosures of CII which took place between July 1, 2015 – July 31, 2019 from CSE to all recipient government departments; this meant that all the disclosures took place when the CSE was guided by the National Defence Act and the Privacy Act.1 In conducting their review NSIRA looked at, “electronic records, correspondence, intelligence reports, legal opinions, policies, procedures, documents pertaining to judicial proceedings, Ministerial Authorizations, and Ministerial Directives of relevance to CSE’s CII disclosure regime” (p. 2). Over the course of its review, NSIRA engaged a range of government agencies that requested disclosures of CII, such as the Royal Canadian Mounted Police (RCMP) and Innovation Science and Economic Development Canada (ISED). NSIRA also assessed the disclosures of CII to CSIS and relevant CSIS’ affidavits to the Federal Court.

Continue reading

Dissecting CSIS’ Statement Concerning Indefinite Metadata Retention

PR? by Ged Carrol (CC BY 2.0) https://flic.kr/p/6jshtz

In this brief post I debunk the language used by CSIS Director Michel Coulombe in his justification of CSIS’s indefinite data retention program. That program involved CSIS obtaining warrants to collect communications and then, unlawfully, retaining the metadata of non-targeted persons indefinitely. This program was operated out of the Operational Data Analysis Centre (ODAC). A Federal Court judge found that CSIS’ and the Department of Justice’s theories for why the program was legal were incorrect: CSIS had been retaining the metadata, unlawfully, since the program’s inception in 2006. More generally, the judge found that CSIS had failed to meet its duty of candour to the court by failing to explain the program, and detail its existence, to the Court.

The public reactions to the Federal Court’s decision have been powerful, with the Minister of Public Safety being challenged on CSIS’s activities and numerous mainstream newspapers publishing stories that criticize CSIS’ activities. CSIS issued a public statement from its Director on the weekend following the Court’s decision, which is available at CSIS’ website. The Federal Court’s decision concerning this program is being hosted on this website, and is also available from the Federal Court’s website. In what follows I comprehensively quote from the Director’s statement and then provide context that, in many cases, reveals the extent to which the Director’s statement is designed to mislead the public.

Continue reading

‘Defending the Core’ of the Network: Canadian vs. American Approaches

In our recent report, “The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians,” we discussed how the Communications Security Establishment (CSE) developed and deployed a sensor network within domestic and foreign telecommunications networks. While our report highlighted some of the concerns linked to this EONBLUE sensor network, including the dangers of secretly extending government surveillance capacity without any public debate about the extensions, as well as how EONBLUE or other CSE programs collect information about Canadians’ communications, we did not engage in a comparison of how Canada and its closest allies monitor domestic network traffic. This post briefly describes the EONBLUE sensor program, what may be equivalent domestic programs in the United States, and the questions that emerge from contrasting what we know about the Canadian and American sensor networks.

What is EONBLUE?

EONBLUE was developed and deployed by the CSE. The CSE is Canada’s premier signals intelligence agency. The EONBLUE sensor network “is a passive SIGINT system that was used to collect ‘full-take’ data, as well as conduct signature and anomaly based detections on network traffic.” Prior Snowden documents showcased plans to integrate EONBLUE into government networks; the network has already been integrated into private companies’ networks. Figure one outlines the different ‘shades of blue’, or variations of the EONBLUE sensors:

EONBLUE Sensors

Continue reading