The Utility of Secret Intelligence in Secret-Intelligence Resistant Political and Bureaucratic Cultures

Dan Lomas’ recent RUSI essay, “The Death of Secret Intelligence? Think Again,” is a good and fair assessment of the value of secret intelligence and open source intelligence. Lomas clearly and forcefully explains the real benefits of secret intelligence for a subset of policymakers and decision makers. You should read it.

To truly take advantage of secret intelligence, however, policymakers and decision makers must want to read and use it. Secret intelligence-resistant (SI-resistant) bureaucratic or political cultures that have seemingly managed—and still do—without substantive amounts of secret intelligence to guide policy analysis or decision making may doubt the value of secret intelligence. Members of these cultures may see open source intelligence as either sufficient or ‘good enough’ for their purposes.1

Those who attempt to reform SI-resistant cultures must grapple with what may be conflicting long-term perceptions of the value (or lack thereof) of this intelligence. Members of this resistant culture can sometimes become even more avoidant of state secrets by virtue of fearing the consequences of knowing or having access to them: when knowing secret intelligence is perceived as being linked to an inability to do much with it, for fear of burning sources and methods and then suffering untold professional or political harms, there are good political and bureaucratic reasons to do without the secret stuff. In these kinds of cultures, there is a risk (real or imagined) that secret intelligence can be toxic to one’s career or future ambitions.

It is in this kind of toxic environment that knowing state secrets may be seen as a problem calling for solutions. Decision makers might have to undertake parallel construction to develop secret intelligence-adjacent fact patterns to justify the conclusions at which they arrived, when those conclusions were in fact guided by secret intelligence. And integrating useful state secrets into policy advice could prevent the circulation of that advice within the government, with the effect of barring uncleared colleagues and managers from the secret intelligence-enhanced (and potentially career-enhancing) insights. Not circulating one’s work could mean that a highly capable policy analyst cannot catch the attention of the uncleared managers or directors who could lift the analyst and their career to the next bureaucratic height. Members of the SI-resistant class might wonder whether secrets are really all that they’re cracked up to be.2

This gulf of doubt, the questions of utility, and the practical ‘do we really need to change?’ questions are challenging issues to overcome in SI-resistant cultures. Perhaps one way forward, though one which somewhat comically requires overcoming certain preferences for government secrecy around access to documents, is to open the vaults (or Archives) of historical secret information.

In cultures which value secret information we can read and watch insider and expert (and…not so expert) explanations, movies, and valorizations of the merit of secret intelligence in transforming a country’s position in the world. This kind of storytelling may be a key ingredient in developing a political and bureaucratic culture that recognizes the value of incorporating secret intelligence more regularly into routine government affairs. Just pointing at bureaucratic and political cultures that are more open to using secret intelligence, however, and saying ‘mimic them!’ is unlikely to drive much change in a culture that has long been secret intelligence-resistant.

Thus, while the RUSI article does an excellent job trumpeting the value of secret and open source intelligence, the advice and findings really may principally apply to countries with high numbers of security-cleared decision makers and where the public—and thus elected politicians—acknowledge the value of secret intelligence amongst the oceans of open source materials that exist around them. And even when there is an appetite for secret intelligence it must be practical to access it.

In some secret intelligence-resistant cultures, there have long been processes where secret intelligence-laden analyst reports have been deposited on non-experts’ desks. Those same non-experts know that if they read the materials they may face jeopardy. On the one hand, they largely cannot disclose what they learn but, on the other, if they do not read the materials and that becomes public knowledge then they may be seen as poor stewards of the realm. The responsible ones will dutifully read their briefing books and ensure they never accidentally reveal their secret knowledge to anyone who isn’t in the secret intelligence tribe. Those less responsible might, instead, expect that they wouldn’t be able to use the secret intelligence anyway and ultimately have more hours in their weeks to guide the realm and her interests when they exclusively rely on non-classified information.

As should be obvious, the aforementioned method of circulating secret intelligence does not present a particularly efficacious way of incorporating secret intelligence into government activities. Another way must be found, ideally one developed in at least marginally public settings and in tandem with genuine efforts to open up historical secret archives so that historians, academics, and public policy makers can come to their own conclusions about what the value of secret intelligence has actually been. Only if, and when, the SI-resistant culture comes to realize it truly has been missing something are broader cultural changes likely to ensue, with that culture’s secret intelligence-resistance shifting at least to secret intelligence-ambivalence. Such would be a small step along a long road towards truly accepting and regularly integrating secret intelligence into the realm’s public affairs.


  1. They may even, largely, be correct. ↩︎
  2. Of course, holding a contrary view are members of invite-only events where a great gnashing of teeth can arise over the ‘secrecy and OSINT problem.’ In these, at least some of the secrecy-indoctrinated participants may even discuss the very question of whether OSINT is truly useful while, ultimately, the room broadly reaches a muttering agreement that the secret intelligence many have spent their careers collecting and enriching really adds a lot of value for decision makers. Even if the same decision makers rarely make use of the information due to their secret intelligence-resistant cultures. Indeed, the gnashing can be enough that a concerned participant might worry that dentists should be on hand to issue mouthguards to some attending participants. ↩︎

Why Is(n’t) TikTok A National Security Risk?

Photo by Ron Lach on Pexels.com

There have been grumblings about TikTok being a national security risk for many years and they’re getting louder with each passing month. Indeed, in the United States a bill has been presented to ban TikTok (“The ANTI-SOCIAL CCP ACT”) and a separate bill (“No TikTok on Government Devices Act”) has passed the Senate and would bar the application from being used on government devices. In Canada, the Prime Minister noted that the country’s signals intelligence agency, the Communications Security Establishment, is “watching very carefully.”

I recently provided commentary where I outlined some of the potential risks associated with TikTok and where it likely should fit into Canada’s national security priorities (spoiler: probably pretty low). Here I just want to expand on my comments a bit to provide some deeper context and reflections.

As with all things security-related, you need to think through what assets you are attempting to protect, how sensitive those assets are, and what measures are more or less likely to protect them. Further, in developing a protection strategy you need to think through how many resources you’re willing to invest to achieve the sought-after protection. This applies as much to national security policy makers as it does to individuals trying to secure devices or networks.

What Is Being Protected

Most public figures who talk about TikTok and national security are presently focused on one or two assets.

First, they worry that a large volume of data may be collected and used by Chinese government agencies, after these agencies receive it either voluntarily from TikTok or after compelling its disclosure. Commentators argue that Chinese companies are bound to obey the national security laws of China and, as such, may be forced to disclose data without any notice to users or non-Chinese government agencies. This data could be used to learn about specific individuals or communities, inclusive of what people are searching for on the platform (e.g., medical information, financial information, sexual preference information), what they are themselves posting that could be embarrassing, or metadata which could be used for subsequent targeting.

Second, some commentators are adopting a somewhat odious language of ‘cognitive warfare’ in talking about TikTok.1 The argument is that the Chinese government might compel the company to modify its algorithms so as to influence what people are seeing on the platform. The intent of this modification would be to influence political preferences or social and cultural perceptions. Some worry this kind of influence could guide whom individuals are more likely to vote for (e.g., you see a number of videos that directly or indirectly encourage you to support particular political parties), cause generalised apathy (e.g., you see videos that suggest that all parties are bad and none worth voting for), or enhance societal tensions (e.g., work to inflame partisanship and impair the functioning of otherwise moderate democracies). Or, as likely, a combination of each of these kinds of influence operations. Moreover, the TikTok algorithm could be modified under government compulsion to prioritise videos that praise some countries or to suppress videos which negatively portray other countries.
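To make the mechanics of this worry concrete, below is a deliberately toy sketch in Python. Everything in it is invented for illustration: the video labels, scores, and the ‘covert bonus’ table are hypothetical and make no claim about how TikTok’s (or any real recommender system’s) ranking actually works. The point is only to show how a small, hidden weight added to a ranking function could quietly reorder what a feed surfaces.

```python
# Hypothetical illustration only: invented scores and labels, not a
# description of TikTok's (or any platform's) actual ranking system.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    engagement_score: float  # predicted likelihood the viewer watches/shares
    topic: str               # coarse, invented editorial label

# A covert adjustment applied to certain topics. In an unmanipulated
# system this table would be empty.
COVERT_BONUS = {"flattering-coverage": 0.3, "critical-coverage": -0.3}

def rank(feed):
    # Order by engagement plus any hidden topical adjustment.
    return sorted(
        feed,
        key=lambda v: v.engagement_score + COVERT_BONUS.get(v.topic, 0.0),
        reverse=True,
    )

feed = [
    Video("Dance clip", 0.80, "entertainment"),
    Video("Critical documentary excerpt", 0.75, "critical-coverage"),
    Video("Upbeat travel montage", 0.60, "flattering-coverage"),
]

for video in rank(feed):
    print(video.title)
# The travel montage jumps to the top and the critical clip drops below
# content it would otherwise outrank, even though no video was removed.
```

Nothing in the public record shows such a weight exists; the sketch simply illustrates why algorithmic opacity, rather than any single video, is what concerns commentators.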

What Is the Sensitivity of the Assets?

The sensitivity of the information and data collected by TikTok is potentially high but, in practice, differs based on the person(s) in question. Research conducted by the University of Toronto’s Citizen Lab found that while TikTok does collect a significant volume of information, that volume largely parallels what Facebook or other Western companies collect. To put this slightly differently, a lot of information is collected and its sensitivity depends on whom it belongs to, who may have access to it, and what those parties do with it.

When we consider who is using TikTok and having their information uploaded to the company’s servers, then, the question becomes whether there is a particular national security risk linked with this activity. While some individuals may be targets based on their political, business, or civil society bona fides, this will not be the case for all (or most) users. However, even in assessing the national security risks linked to individuals (or associated groups), it’s helpful to do a little more thinking.

First, the amount of information that is collected by TikTok, when merged with other data which could theoretically be collected using other signals intelligence methods (e.g., extracting metadata and select content from middle-boxes, Internet platforms, open-source locations, etc), could be very revealing. Five Eyes countries (i.e., Australia, Canada, New Zealand, the United Kingdom, and the United States of America) collect large volumes of metadata on vast swathes of the world’s populations in order to develop patterns of life which, when added together, can be deeply revelatory. When and how those countries’ intelligence agencies actually use the collected information varies and is kept highly secret. Generally, however, only a small subset of the individuals whose information is collected and retained ever have actions taken towards them. Nonetheless, we know that there is a genuine concern about information from private companies being obtained by intelligence services in the Five Eyes and it’s reasonable to be concerned that similar activities might be undertaken by Chinese intelligence services.
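To illustrate why content-free metadata is treated as so revealing, here is a minimal sketch in Python using entirely invented records; no particular platform, dataset, or collection method is implied. Aggregating nothing more than timestamps and coarse locations is enough to surface a routine.

```python
# Hypothetical illustration only: the records below are invented and no
# real data source or collection method is implied.
from collections import Counter
from datetime import datetime

# Each tuple: (ISO timestamp of an app session, coarse location label)
records = [
    ("2022-11-01T07:45:00", "home"),
    ("2022-11-01T12:10:00", "clinic"),
    ("2022-11-02T07:50:00", "home"),
    ("2022-11-02T23:30:00", "bar"),
    ("2022-11-03T07:40:00", "home"),
    ("2022-11-03T12:05:00", "clinic"),
]

# Count how often each (hour-of-day, location) pairing recurs.
routine = Counter()
for timestamp, place in records:
    hour = datetime.fromisoformat(timestamp).hour
    routine[(hour, place)] += 1

# Repeated pairings sketch a pattern of life: online at home every morning,
# recurring midday visits to a clinic, an occasional late night out. None of
# this required reading a single piece of content.
for (hour, place), count in routine.most_common():
    print(f"~{hour:02d}:00 at {place}: seen {count} time(s)")
```

Scaled from six invented rows to months of real records, and merged with data from other sources, this is the kind of aggregation that makes ‘mere metadata’ deeply revelatory.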

Second, the kinds of content information which are retained by TikTok could be embarrassing at a future time, or used by state agencies in ways that users would not expect or prefer. Imagine a situation where a young person says or does something on TikTok which is deeply offensive. Fast forward 3-4 years and their parents are diplomats or significant members of the business community, and that offensive content is used by Chinese security services to embarrass or otherwise inconvenience the parents. Such influence operations might impede Canada’s ability to conduct its diplomacy abroad or undermine a business’s ability to prosper.

Third, the TikTok algorithm is not well understood. There is a risk that the Chinese government might compel ByteDance, and through them the TikTok platform, to modify algorithms to amplify some content and not others. It is hard to assess how ‘sensitive’ a population’s general sense of the world is but, broadly, if a surreptitious foreign influence operation occurred it might potentially affect how a population behaves or sees the world. To be clear this kind of shift in behaviour would not follow from a single video but from a concerted effort over time that shifted social perceptions amongst at least some distinct social communities. The sensitivity of the information used to identify videos to play, then, could be quite high across a substantial swathe of the population using the platform.

It’s important to recognise that in the aforementioned examples there is no evidence that ByteDance, which owns TikTok, has been compelled by the Chinese government to perform these activities. But these are the kinds of sensitivities that are linked to using TikTok and are popularly discussed.

What Should Be Done To Protect Assets?

The threats which are posed by TikTok are, at the moment, speculative: it could be used for any number of things. People’s concerns are linked less to the algorithm or the data that is collected and more to ByteDance being a Chinese company that might be influenced by the Chinese government to share data or undertake activities which are deleterious to Western countries’ interests.

Bluntly: the issue raised by TikTok is not necessarily linked to the platform itself but to the geopolitical struggles between China and other advanced economies throughout the world. We don’t have a TikTok problem per se but, instead, have a Chinese national security and foreign policy problem. TikTok is just a very narrow lens through which concerns and fears are being channelled.

So in the absence of obvious and deliberate harmful activities being undertaken by ByteDance and TikTok at the behest of the Chinese government, what should be done? At the outset it’s worth recognising that many of the concerns expressed by politicians—and especially those linked to surreptitious influence operations—would already run afoul of Canadian law. The CSIS Act bars clandestine foreign intelligence operations which are regarded as threatening the security of Canada. Specifically, ‘threats to the security of Canada’ means:

(a) espionage or sabotage that is against Canada or is detrimental to the interests of Canada or activities directed toward or in support of such espionage or sabotage,

(b) foreign influenced activities within or relating to Canada that are detrimental to the interests of Canada and are clandestine or deceptive or involve a threat to any person,

(c) activities within or relating to Canada directed toward or in support of the threat or use of acts of serious violence against persons or property for the purpose of achieving a political, religious or ideological objective within Canada or a foreign state, and

(d) activities directed toward undermining by covert unlawful acts, or directed toward or intended ultimately to lead to the destruction or overthrow by violence of, the constitutionally established system of government in Canada,

CSIS is authorised to undertake measures which would reduce the threats to the security of Canada, perhaps in partnership with the Communications Security Establishment, should such a threat be identified and a warrant obtained from the Federal Court.

On the whole a general ban on TikTok is almost certainly disproportionate and unreasonable at this point in time. There is no evidence of harm. There is no evidence of influence by the Chinese government. Rather than banning the platform generally I think that more focused legislation or policy could make sense.

First, I think that legislation or (preferably) policies precluding at least some members of government and senior civil servants from using TikTok has some merit. In these cases a risk analysis should be conducted to determine if collected information would undermine the Government of Canada’s ability to secure confidential information or if the collected information could be used for intelligence operations against the government officials. Advice might, also, be issued by the Canadian Security Intelligence Service so that private organisations are aware of their risks. In exceptional situations some kind of security requirements might also be imposed on private organisations and individuals, such as those who are involved in especially sensitive roles managing critical infrastructure systems. Ultimately, I suspect the number of people who should fall under this ban would, and should, be pretty small.

Second, what makes sense is legislation that requires social media companies writ large—not just TikTok—to make their algorithms and data flows legible to regulators. Moreover, individual users should be able to learn, and understand, why certain content is being prioritised or shown to them. Should platforms decline to comply with such a law then sanctions may be merited. Similarly, should algorithmic legibility showcase that platforms are being manipulated or developed in ways that deliberately undermine social cohesion then some sanctions might be merited, though with the caveat that “social cohesion” should be understood as referring to platforms being deliberately designed to incite rage or other strong emotions with the effect of continually, and artificially, weakening social cohesion and amplifying social cleavages. The term should not, however, be seen as a kind of code for creating exclusionary social environments where underprivileged groups continue to be treated in discriminatory ways.

So Is TikTok ‘Dangerous’ From A National Security Perspective?

Based on open source information2 there is no reason to think that TikTok is currently a national security threat. Are there any risks associated with the platform? Sure, but they need to be juxtaposed against equivalent or more serious threats and priorities. We only have so many resources to direct towards the growing legion of legitimate national security risks and issues; funnelling a limited set of resources towards TikTok may not be the best kind of prioritisation.

Consider that while the Chinese government could compel TikTok to disclose information about its users to intelligence and security services…the same government could also use business cutouts and purchase much of the same information from data brokers operating in the United States and other jurisdictions. There would be no need to secretly force a company to do something when, instead, it could just lawfully acquire equivalent (or more extensive!) information. This is a pressing and real national security (and privacy!) issue and is deserving of legislative scrutiny and attention.

Further, while there is a risk that TikTok could be used to manipulate social values…the same is true of other social networking services. Indeed, academic and journalistic research over the past 5-7 years has drawn attention to how popular social media services are designed to deliver dopamine hits and keep us on them. We know that various private companies and public organisations around the world work tirelessly to ‘hack’ those algorithms and manipulate social values. Of course this broader manipulation doesn’t mean that we shouldn’t care but, also, makes clear that TikTok isn’t the sole vector of these efforts. Moreover, there are real questions about how well social influence campaigns work: do they influence behaviour—are they supplying change?—or is the efficaciousness of any campaign representative of an attentive and interested pre-existing audience—is demand for the content the problem?

The nice thing about banning, blocking, or censoring material, or undertaking some other kind of binary decision, is that you feel like you’ve done something. Bans, blocks, and censors are typically designed for a black and white world. We, however, live in a world that is actually shrouded in greys. We only have so much legislative time, so much policy capacity, so much enforcement ability: it should all be directed efficiently to understanding, appreciating, and addressing the fullness of the challenges facing states and society. This time and effort should not be spent on performative politics that is great for providing a dopamine hit but which fails to address the real underlying issues.


  1. I have previously talked about the broader risks of correlating national security and information security.
  2. Open source information means information which you or I can find, and read, without requiring a security clearance.

Review: The Bridge in the Parks: The Five Eyes and Cold War Counter-Intelligence

There are innumerable books, movies, podcasts, and TV shows that discuss and dramatize the roles of intelligence services during the Cold War. Comparatively few of those media, however, discuss Canada’s role during the same period. Molinaro’s edited volume, The Bridge in the Parks: The Five Eyes and Cold War Counter-Intelligence, goes some way toward correcting this deficiency by including five chapters on Canada,1 as well as a postscript, in a nine-chapter book about Cold War counter-intelligence practices.

The Bridge in the Parks is written by historians who have used archival research and access to information laws to unearth information about a variety of Five Eyes security services. The aim of the text as a whole is to “add nuance to what has often been a polarizing historical field in which scholars are forced to choose between focusing on abuses and the overreach of intelligence agencies in the Cold War or discussing successfully prosecuted individuals cases of counter-intelligence. This volume thus seeks to add complexity to this history, more in line with the “grey” world in which counter-intelligence has often existed” (8). On the whole, the book is successful in achieving this aim.


Review: Top Secret Canada: Understanding the Canadian Intelligence and National Security Community

Canadian students of national security have historically suffered in ways that their British and American colleagues have not. Whereas our Anglo-cousins enjoy a robust literature that, amongst other things, maps out what parts of their governments are involved in what elements of national security, Canadians have not had similar comprehensive maps. The result has been that scholars have been left to depend on personal connections, engagements with government insiders, leaked and redacted government documents, and a raft of supposition and logical inferences. Top Secret Canada: Understanding the Canadian Intelligence and National Security Community aspires to correct some of this asymmetry and is largely successful.

The book is divided into chapters about central agencies, core collection and advisory agencies, operations and enforcement and community engagement agencies, government departments with national security functions, and the evolving national security review landscape. Chapters generally adhere to a structure that describes an agency’s mandate, inter-agency cooperation, the resources possessed and needed by the organization, the challenges facing the agency, and its controversies. This framing gives both the book, and most chapters, a sense of continuity throughout.

The editors of the volume were successful in getting current and former government bureaucrats and policymakers, as well as academics, to contribute chapters. Part One, which discusses the central agencies, was amongst the most revealing. Fyffe’s discussion of the evolution of the National Security Intelligence Advisor’s role and the roles of the various intelligence secretariats, combined with Lilly’s explanation of the fast-paced and issue-driven focus of political staffers in the Prime Minister’s Office, pulls back the curtain on how Canada’s central agencies intersect with national security and intelligence issues. As useful as these chapters are, they also lay bare the difficulty in structuring the book: whereas Fyffe’s chapter faithfully outlines the Privy Council Office per the structure outlined in the volume’s introduction, Lilly’s adopts a structure that, significantly, outlines what government bureaucrats must do to be more effective in engaging with political staff as well as how political staffers’ skills and knowledge could be used by intelligence and security agencies. This bifurcation in the authors’ respective intents creates a tension in answering ‘who is this book for?’, which carries on in some subsequent chapters. Nonetheless, I found these chapters perhaps the most insightful for the national security-related challenges faced by those closest to the Prime Minister.


Accountability and the Canadian Government’s Reporting of Computer Vulnerabilities and Exploits

Photo by Taskin Ashiq on Unsplash

I have a new draft paper that outlines why the Canadian government should develop, and publish, the guidelines it uses when determining whether to acquire, use, or disclose computer- and computer-system vulnerabilities. At its crux, the paper argues that an accountability system was developed in the 1970s based on the intrusiveness of government wiretaps and that state-used malware is just as intrusive, if not more so. Government agencies should be held to at least as high a standard today as they were forty years ago (and, arguably, an even higher one). It’s important to recognize that while the paper argues for a focus on defensive cybersecurity — disclosing vulnerabilities as a default in order to enhance the general security of all Canadians and residents of Canada, as well as to improve the security of all government of Canada institutions — it recognizes that some vulnerabilities may be retained to achieve a limited subset of investigative and intelligence operations. As such, the paper does not rule out the use of malware by state actors but, instead, seeks to restrict the use of such malware while also drawing its use into a publicly visible accountability regime.

I’m very receptive to comments on this paper and will seek to incorporate feedback before sending the paper to an appropriate journal around mid-December.

Abstract:

Computer security vulnerabilities can be exploited by unauthorized parties to affect targeted systems contrary to the preferences of their owner or controller. Companies routinely issue patches to remediate vulnerabilities after learning that they exist. However, these flaws are sometimes obtained, used, and kept secret by government actors, who assert that revealing vulnerabilities would undermine intelligence, security, or law enforcement operations. This paper argues that a publicly visible accountability regime is needed to control the discovery, purchase, use, and reporting of computer exploits by Canadian government actors for two reasons. First, because when utilized by Canadian state actors the vulnerabilities could be leveraged to deeply intrude into the private lives of citizens, and legislative precedent indicates that such intrusions should be carefully regulated so that the legislature can hold the government to account. Second, because the vulnerabilities underlying any exploits could be discovered or used by a range of hostile operators to subsequently threaten the personal security of Canadian citizens and residents of Canada, or the integrity of democratic institutions. On these bases, it is of high importance that the government of Canada formally develop, publish, and act according to an accountability regime that would regulate its agencies’ exploitation of computer vulnerabilities.

Download .pdf // SSRN Link

SIGINT Summaries Update: Covernames for CSE, GCHQ, and GCSB

Today I have published a series of pages that contain covernames associated with the Communications Security Establishment (CSE), Government Communications Headquarters (GCHQ), and Government Communications Security Bureau (GCSB). Each of the pages lists covernames which are publicly available, as well as explanations of what the given covernames refer to, when such information is available. The majority of the covernames listed are from documents which were provided to journalists by Edward Snowden, and which have been published in the public domain. A similar listing concerning the NSA’s covernames is forthcoming.

You may also want to visit Electrospaces.net, which has also developed lists of covernames for some of the above-mentioned agencies, as well as the National Security Agency (NSA).

All of the descriptions of what covernames mean or refer to are done on a best-effort basis; if you believe there is additional publicly referenced material derived from CSE, GCHQ, or GCSB documents which could supplement the descriptions, please let me know. Entries will be updated periodically as additional materials become available.