Why Is(n’t) TikTok A National Security Risk?

Photo by Ron Lach on Pexels.com

There have been grumblings about TikTok being a national security risk for many years and they’re getting louder with each passing month. Indeed, in the United States a bill has been presented to ban TikTok (“The ANTI-SOCIAL CCP ACT”) and a separate bill (“No TikTok on Government Devices Act”) has passed the Senate and would bar the application from being used on government devices. In Canada, the Prime Minister noted that the country’s signals intelligence agency, the Communications Security Establishment, is “watching very carefully.”

I recently provided commentary where I outlined some of the potential risks associated with TikTok and where it likely should fit into Canada’s national security priorities (spoiler: probably pretty low). Here I just want to expand on my comments a bit to provide some deeper context and reflections.

As with all things security-related, you need to think through what assets you are attempting to protect, how sensitive those assets are, and what measures are more or less likely to protect them. Further, in developing a protection strategy you need to think through how many resources you’re willing to invest to achieve the sought-after protection. This applies as much to national security policy makers as it does to individuals trying to secure devices or networks.

What Is Being Protected?

Most public figures who talk about TikTok and national security are presently focused on two assets.

First, they worry that a large volume of data may be collected and used by Chinese government agencies, after these agencies receive it either voluntarily from TikTok or after compelling its disclosure. Commentators argue that Chinese companies are bound to obey the national security laws of China and, as such, may be forced to disclose data without any notice to users or non-Chinese government agencies. This data could reveal details about specific individuals or communities, including what people are searching for on the platform (e.g., medical, financial, or sexual preference information), what they themselves are posting that could later prove embarrassing, or metadata that could be used for subsequent targeting.

Second, some commentators are adopting the somewhat odious language of ‘cognitive warfare’ in talking about TikTok.1 The argument is that the Chinese government might compel the company to modify its algorithms so as to influence what people see on the platform. The intent of this modification would be to influence political preferences or social and cultural perceptions. Some worry this kind of influence could guide whom individuals are more likely to vote for (e.g., you see a number of videos that directly or indirectly encourage you to support particular political parties), cause generalised apathy (e.g., you see videos that suggest that all parties are bad and none worth voting for), or enhance societal tensions (e.g., work to inflame partisanship and impair the functioning of otherwise moderate democracies). Or, as likely, some combination of these kinds of influence operations. Moreover, the TikTok algorithm could be modified by government compulsion to prioritise videos that praise some countries or suppress videos that negatively portray other countries.

What Is the Sensitivity of the Assets?

The sensitivity of the information and data collected by TikTok is potentially high but, in practice, varies based on the person(s) in question. Research conducted by the University of Toronto’s Citizen Lab found that while TikTok does collect a significant volume of information, that volume largely parallels what Facebook and other Western companies collect. To put this slightly differently, a lot of information is collected, and its sensitivity depends on whom it belongs to, who may have access to it, and what those parties do with it.

When we consider who is using TikTok and having their information uploaded to the company’s servers, then, the question becomes whether there is a particular national security risk linked with this activity. While some individuals may be targets based on their political, business, or civil society bona fides, this will not be the case with all (or most) users. However, even in assessing the national security risks linked to individuals (or associated groups) it’s helpful to do a little more thinking.

First, the amount of information that is collected by TikTok, when merged with other data which could theoretically be collected using other signals intelligence methods (e.g., extracting metadata and select content from middle-boxes, Internet platforms, open-source locations, etc.), could be very revealing. Five Eyes countries (i.e., Australia, Canada, New Zealand, the United Kingdom, and the United States of America) collect large volumes of metadata on vast swathes of the world’s populations in order to develop patterns of life which, when added together, can be deeply revelatory. When and how those countries’ intelligence agencies actually use the collected information varies and is kept highly secret. Generally, however, actions are taken against only a small subset of the individuals whose information is collected and retained. Nonetheless, we know that there is a genuine concern about information from private companies being obtained by intelligence services in the Five Eyes, and it’s reasonable to be concerned that similar activities might be undertaken by Chinese intelligence services.
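To make the ‘patterns of life’ idea concrete, here is a minimal Python sketch of how sparse metadata can be aggregated into a routine profile. All of the records, field names, and locations below are hypothetical; this illustrates the general technique, not any actual platform or agency system.

```python
from collections import Counter
from datetime import datetime

# Hypothetical metadata records: (ISO timestamp, coarse location) pairs.
# No content at all -- only when and roughly where a device was active.
events = [
    ("2022-11-07T08:05:00", "transit-corridor-a"),
    ("2022-11-07T12:30:00", "office-district"),
    ("2022-11-07T23:10:00", "residential-zone-4"),
    ("2022-11-08T08:02:00", "transit-corridor-a"),
    ("2022-11-08T12:45:00", "office-district"),
    ("2022-11-08T23:20:00", "residential-zone-4"),
]

def pattern_of_life(records):
    """Count where a device tends to be during each hour of the day."""
    profile = Counter()
    for timestamp, location in records:
        hour = datetime.fromisoformat(timestamp).hour
        profile[(hour, location)] += 1
    return profile

# Recurring (hour, location) pairs start to reveal commutes, workplaces,
# and homes -- the building blocks of a 'pattern of life'.
for (hour, location), count in pattern_of_life(events).most_common():
    print(f"{hour:02d}:00 -> {location} (seen {count}x)")
```

Even this toy aggregation shows why metadata is treated as sensitive: the routine emerges without a single piece of content being read.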

Second, the kinds of content information which are retained by TikTok could be embarrassing at a future time, or used by state agencies in ways that users would not expect or prefer. Imagine a situation where a young person says or does something on TikTok which is deeply offensive. Fast forward 3-4 years: their parents are now diplomats or significant members of the business community, and that offensive content is used by Chinese security services to embarrass or otherwise inconvenience the parents. Such influence operations might impede Canada’s ability to conduct its diplomacy abroad or undermine a business’s ability to prosper.

Third, the TikTok algorithm is not well understood. There is a risk that the Chinese government might compel ByteDance, and through it the TikTok platform, to modify algorithms to amplify some content and suppress other content. It is hard to assess how ‘sensitive’ a population’s general sense of the world is but, broadly, a surreptitious foreign influence operation might affect how a population behaves or sees the world. To be clear, this kind of shift in behaviour would not follow from a single video but from a concerted effort over time that shifted social perceptions amongst at least some distinct social communities. The sensitivity of the information used to identify videos to play, then, could be quite high across a substantial swathe of the population using the platform.
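To see why this kind of manipulation would be hard to detect from the outside, consider a minimal sketch of how a small, covert bias could be folded into an otherwise ordinary engagement-based ranker. Everything here is hypothetical (the topic labels, scores, and multiplier values are invented for illustration); nothing is publicly known about how TikTok’s ranker actually works.

```python
# Hypothetical feed items with the engagement score a ranker would
# ordinarily sort by. All ids, topics, and numbers are invented.
videos = [
    {"id": "a", "topic": "cooking", "engagement": 0.91},
    {"id": "b", "topic": "critical-of-state", "engagement": 0.90},
    {"id": "c", "topic": "favourable-to-state", "engagement": 0.70},
]

# A covert per-topic multiplier. Nudges this small are hard to tell
# apart from ordinary ranking noise on any single feed, but they
# compound across millions of impressions.
TOPIC_BIAS = {"favourable-to-state": 1.15, "critical-of-state": 0.85}

def rank(items, bias):
    """Sort by engagement, quietly scaled by a per-topic bias."""
    return sorted(
        items,
        key=lambda v: v["engagement"] * bias.get(v["topic"], 1.0),
        reverse=True,
    )

print([v["id"] for v in rank(videos, {})])          # ['a', 'b', 'c']
print([v["id"] for v in rank(videos, TOPIC_BIAS)])  # ['a', 'c', 'b']
```

A 10-15% per-item nudge is invisible in any single feed, yet it flips orderings at the margin; repeated at scale, it shifts aggregate exposure.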

It’s important to recognise that in the aforementioned examples there is no evidence that ByteDance, which owns TikTok, has been compelled by the Chinese government to perform these activities. But these are the kinds of sensitivities that are linked to using TikTok and are popularly discussed.

What Should Be Done To Protect Assets?

The threats posed by TikTok are, at the moment, speculative: the platform could be used for any number of things. People’s concerns are linked less to the algorithm or the data that is collected and more to ByteDance being a Chinese company that might be influenced by the Chinese government to share data or undertake activities which are deleterious to Western countries’ interests.

Bluntly: the issue raised by TikTok is not necessarily linked to the platform itself but to the geopolitical struggles between China and other advanced economies throughout the world. We don’t have a TikTok problem per se but, instead, have a Chinese national security and foreign policy problem. TikTok is just a very narrow lens through which concerns and fears are being channelled.

So, in the absence of obvious and deliberate harmful activities being undertaken by ByteDance and TikTok at the behest of the Chinese government, what should be done? At the outset it’s worth recognising that many of the concerns expressed by politicians–and especially those linked to surreptitious influence operations–would already run afoul of Canadian law. The CSIS Act captures clandestine foreign intelligence operations which are regarded as threatening the security of Canada. Specifically, “threats to the security of Canada” means:

(a) espionage or sabotage that is against Canada or is detrimental to the interests of Canada or activities directed toward or in support of such espionage or sabotage,

(b) foreign influenced activities within or relating to Canada that are detrimental to the interests of Canada and are clandestine or deceptive or involve a threat to any person,

(c) activities within or relating to Canada directed toward or in support of the threat or use of acts of serious violence against persons or property for the purpose of achieving a political, religious or ideological objective within Canada or a foreign state, and

(d) activities directed toward undermining by covert unlawful acts, or directed toward or intended ultimately to lead to the destruction or overthrow by violence of, the constitutionally established system of government in Canada,

CSIS is authorised to undertake measures to reduce threats to the security of Canada, perhaps in partnership with the Communications Security Establishment, should such a threat be identified and a warrant be obtained from the Federal Court.

On the whole, a general ban on TikTok is almost certainly disproportionate and unreasonable at this point in time. There is no evidence of harm. There is no evidence of influence by the Chinese government. Rather than banning the platform outright, I think more focused legislation or policy could make sense.

First, I think that legislation or (preferably) policies precluding at least some members of government and senior civil servants from using TikTok has some merit. In these cases a risk analysis should be conducted to determine whether the collected information would undermine the Government of Canada’s ability to secure confidential information, or whether it could be used for intelligence operations against government officials. Advice might also be issued by the Canadian Security Intelligence Service so that private organisations are aware of their risks. In exceptional situations some kind of security requirements might also be imposed on private organisations and individuals, such as those who are involved in especially sensitive roles managing critical infrastructure systems. Ultimately, I suspect the number of people who should fall under this ban would, and should, be pretty small.

Second, what makes sense is legislation that requires social media companies writ large–not just TikTok–to make their algorithms and data flows legible to regulators. Moreover, individual users should be able to learn, and understand, why certain content is being prioritised or shown to them. Should platforms decline to comply with such a law, then sanctions may be merited. Similarly, should algorithmic legibility show that platforms are being manipulated or designed in ways that deliberately undermine social cohesion, then some sanctions might be merited, with the caveat that “social cohesion” should be understood as referring to platforms being deliberately designed to incite rage or other strong emotions with the effect of continually, and artificially, weakening social cohesion and amplifying social cleavages. The term should not, however, be seen as a kind of code for creating exclusionary social environments where underprivileged groups continue to be treated in discriminatory ways.
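As a sketch of what ‘legibility’ might require in practice, a regulator could oblige platforms to retain a machine-readable explanation record for each recommendation, queryable by users and auditors alike. The schema below is entirely hypothetical (the class, field names, and factor labels are invented); it simply illustrates the kind of disclosure such a law might mandate.

```python
from dataclasses import dataclass, field

@dataclass
class RecommendationExplanation:
    """A hypothetical per-recommendation disclosure record that a
    platform could be required to retain and expose to users and
    regulators under an algorithmic-legibility law."""
    video_id: str
    model_version: str
    # The ranked factors that drove this recommendation, with weights.
    factors: dict[str, float] = field(default_factory=dict)

    def summary(self) -> str:
        """A plain-language answer to 'why was I shown this?'."""
        top = max(self.factors, key=self.factors.get)
        return f"Video {self.video_id} was shown mainly because of '{top}'."

record = RecommendationExplanation(
    video_id="v-123",
    model_version="ranker-2023-01",
    factors={"watched-similar-creators": 0.62, "trending-in-region": 0.21},
)
print(record.summary())
```

Records like this would let auditors look for systematic skews (such as the covert topic bias sketched earlier) without the platform having to publish its entire ranking model.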

So Is TikTok ‘Dangerous’ From A National Security Perspective?

Based on open source information2 there is no reason to think that TikTok is currently a national security threat. Are there any risks associated with the platform? Sure, but they need to be juxtaposed against equivalent or more serious threats and priorities. We only have so many resources to direct towards the growing legion of legitimate national security risks and issues; funnelling a limited set of resources towards TikTok may not be the best kind of prioritisation.

Consider that while the Chinese government could compel TikTok to disclose information about its users to intelligence and security services…the same government could also use business cutouts and purchase much of the same information from data brokers operating in the United States and other jurisdictions. There would be no need to secretly force a company to do something when, instead, it could just lawfully acquire equivalent (or more extensive!) information. This is a pressing and real national security (and privacy!) issue and is deserving of legislative scrutiny and attention.

Further, while there is a risk that TikTok could be used to manipulate social values…the same is true of other social networking services. Indeed, academic and journalistic research over the past 5-7 years has drawn attention to how popular social media services are designed to deliver dopamine hits and keep us on them. We know that various private companies and public organisations around the world work tirelessly to ‘hack’ those algorithms and manipulate social values. Of course this broader manipulation doesn’t mean that we shouldn’t care but, also, makes clear that TikTok isn’t the sole vector of these efforts. Moreover, there are real questions about how well social influence campaigns work: do they actually change behaviour (is supply the problem?), or does the efficaciousness of any campaign simply reflect an attentive and interested pre-existing audience (is demand for the content the problem?).

The nice thing about banning, blocking, or censoring material, or undertaking some other kind of binary decision, is that you feel like you’ve done something. Bans, blocks, and censorship are designed for a black and white world. We, however, live in a world that is shrouded in greys. We only have so much legislative time, so much policy capacity, so much enforcement ability: it should all be directed efficiently to understanding, appreciating, and addressing the fullness of the challenges facing states and society. This time and effort should not be spent on performative politics that provides a dopamine hit but fails to address the real underlying issues.


  1. I have previously talked about the broader risks of correlating national security and information security.
  2. Open source information means information which you or I can find, and read, without requiring a security clearance.

Recommended Books from 2011 Readings

Despite some cries that the publishing industry is at the precipice of financial doom, it’s hard to tell based on the proliferation of texts being published year after year. With such high volumes of new works being produced it can be incredibly difficult to sort the wheat from the chaff. Within scholarly circles it (sometimes) becomes readily apparent which books are above middling quality by turning to citation indices, but outside of such (often paywall protected) circles it can be more challenging to ascertain which texts are clearly worth reading and which are not.

While I can hardly claim to speak with the weight of scholarly indices, I do read (and rate) a prolific number of texts each year. In what follows, I offer a list of the ‘best’ books that I read through 2011. Some are thought-provoking, others were important in how I understood various facets of the policy process, and still others offer interesting tidbits of information that have until now been hidden in shadow. For each book I’ll identify its main aim and a few points about what made it compelling enough to get onto my list. Texts are not arranged in any particular ranking order and all should be available through your preferred book seller.


Vancouver’s Human Flesh Search Engine

Photo by Richard Eriksson

I don’t like violence, vandalism, or other actions that generally cause destruction. Certainly there are cases where violent social dissent is a sad but important final step to fulfil a much-needed social change (e.g. overthrowing a ruinous dictator, tipping the scale to defend or secure essential civil rights), but riotous behaviour following a hockey game lacks any legitimating force. Unfortunately, in the aftermath of game seven between the Vancouver Canucks and Boston Bruins a riot erupted in downtown Vancouver that caused significant harm to individuals and damage to the urban environment.

The riot itself is a sad event. What is similarly depressing is the subsequent mob mentality that has been cheered on by the social media community. Shortly after the riot, prominent local bloggers including Rebecca Bollwitt linked to social media websites and encouraged readers/visitors to upload their recordings and identify those caught on camera. In effect, Canadians were, and still are, being encouraged by their peers and social media ‘experts’ to use social media to locally instantiate a human flesh search engine (I will note that Bollwitt herself has since struck through her earliest endorsement of mob-championing). Its manifestation is seemingly perceived by many (most?) social media users as a victory of the citizenry and inhabitants of Vancouver over individuals alleged to have committed crimes.

Perhaps unsurprisingly, I have significant issues with this particular search engine. In this post, I’m going to first provide a brief recap of the recent events in Vancouver and then I’ll quickly explain the human flesh search engine (HFSE), both how it works and the harms it can cause. I’m going to conclude by doing two things: first, I’m going to suggest that Vancouver is presently driving a local HFSE and note the prospective harms that may befall those unfortunate enough to get caught within its maw. Second, I’m going to suggest why citizens are ill-suited to carry out investigations that depend on social media-based images and reports.


References for ‘Putting the Meaningful into Meaningful Consent’

Photo by Stephanie Booth

During my presentation last week at Social Media Club Vancouver – abstract available! – I drew from a large set of sources, the majority of which differed from my earlier talk at Social Media Camp Victoria. As noted earlier, it’s almost impossible to give full citations in the middle of a talk, but I want to make them available post-talk for interested parties.

Below is my keynote presentation and list of references. Unfortunately academic paywalls prevent me from linking to all of the items used, to say nothing of chapters in various books. Still, most of the articles should be accessible through Canadian university libraries, and most of the books are in print (if sometimes expensive).

I want to thank Lorraine Murphy and Cathy Browne for inviting me and doing a stellar job of publicizing my talk to the broader media. It was a delight speaking to the group at SMC Vancouver, as well as to reporters and their audiences across British Columbia and Alberta.

Keynote presentation [20.4MB; made in Keynote ’09]

References

Bennett, C. (1992). Regulating Privacy: Data Protection and Public Policy in Europe and the United States. Ithaca: Cornell University Press.

Bennett, C. (2008). The Privacy Advocates: Resisting the Spread of Surveillance. Cambridge, Mass.: The MIT Press.

Carey, R. and Burkell, J. (2009). ‘A Heuristics Approach to Understanding Privacy-Protecting Behaviors in Digital Social Environments’, in I. Kerr, V. Steeves, and C. Lucock (eds.). Lessons From the Identity Trail: Anonymity, Privacy and Identity in a Networked Society. Toronto: Oxford University Press. 65-82.

Chew, M., Balfanz, D., and Laurie, B. (2008). ‘(Under)mining Privacy in Social Networks’, Proceedings of W2SP: Web 2.0 Security and Privacy: 1-5.

Fischer-Hübner, S., Pettersson, J. S., and Bergmann, M. (2008). ‘HCI Designs for Privacy-Enhancing Identity Management’, in A. Acquisti, S. Gritzalis, C. Lambrinoudakis, and S. D. C. di Vimercati (eds.). Digital Privacy: Theory, Technologies, and Practices. New York: Auerbach Publications. 229-252.

Flaherty, D. (1972). Privacy in Colonial New England. Charlottesville, VA: University Press of Virginia.

Hoofnagle, C. J., King, J., Li, S., and Turow, J. (2010). ‘How Different are Young Adults from Older Adults When it Comes to Information Privacy Attitudes and Policies?’ Available at: http://www.ftc.gov/os/comments/privacyroundtable/544506-00125.pdf

Karyda, M. and Kokolakis, S. (2008). ‘Privacy Perceptions among Members of Online Communities’, in A. Acquisti, S. Gritzalis, C. Lambrinoudakis, and S. D. C. di Vimercati (eds.). Digital Privacy: Theory, Technologies, and Practices. New York: Auerbach Publications. 253-266.

Kerr, I., Barrigar, J., Burkell, J., and Black, K. (2009). ‘Soft Surveillance, Hard Consent: The Law and Psychology of Engineering Consent’, in I. Kerr, V. Steeves, and C. Lucock (eds.). Lessons From the Identity Trail: Anonymity, Privacy and Identity in a Networked Society. Toronto: Oxford University Press. 5-22.

Marwick, A. E., Murgia-Diaz, D., and Palfrey Jr., J. G. (2010). ‘Youth, Privacy and Reputation (Literature Review)’. Berkman Center Research Publication No. 2010-5; Harvard Law Working Paper No. 10-29. URL: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1588163

O’Reilly, T. and Battelle, J. (2009). ‘Web Squared: Web 2.0 Five Years On’. Presented at Web 2.0 Summit 2009, at http://www.web2summit.com/web2009/public/schedule/detail/10194

Steeves, V. (2009). ‘Reclaiming the Social Value of Privacy’, in I. Kerr, V. Steeves, and C. Lucock (eds.). Lessons From the Identity Trail: Anonymity, Privacy and Identity in a Networked Society. Toronto: Oxford University Press.

Steeves, V. and Kerr, I. (2005). ‘Virtual Playgrounds and Buddybots: A Data-Minefield for Tweens’, Canadian Journal of Law and Technology 4(2): 91-98.

Turow, J., King, J., Hoofnagle, C. J., Bleakley, A., and Hennessy, M. (2009). ‘Contrary to What Marketers Say, Americans Reject Tailored Advertising and Three Activities that Enable It’. Available at: http://graphics8.nytimes.com/packages/pdf/business/20090929-Tailored_Advertising.pdf

Turow, J. (2007). ‘Cracking the Consumer Code: Advertisers, Anxiety, and Surveillance in the Digital Age’, in K. D. Haggerty and R. V. Ericson (eds.). The New Politics of Surveillance and Visibility. Toronto: University of Toronto Press.

References for Traffic Analysis, Privacy, and Social Media

In my presentation at Social Media Camp Victoria (abstract available!), I drew heavily from various academic literatures and public sources. Given the nature of talks, it’s nearly impossible to cite as you’re talking without entirely disrupting the flow of the presentation. This post is an attempted end-run/compromise to that problem: you get references and (what was, I hope) a presentation that flowed nicely!

There is a full list of references below, as well as a downloadable version of my keynote presentation (sorry powerpoint users!). As you’ll see, some references are behind closed academic paywalls: this really, really, really sucks, and is an endemic problem plaguing academia. Believe me when I say that I’m as annoyed as you are that the academic publishing system locks up the research that the public is paying for (actually, I probably hate it even more than you do!), but unfortunately I can’t do much to make it more available without running afoul of copyright trolls myself. As for books that I’ve drawn from, there are links to chapter selections or book reviews where possible.

Keynote presentation [4.7MB; made in Keynote ’09]

References:

Breyer, P. (2005). ‘Telecommunications Data Retention and Human Rights: The Compatibility of Blanket Traffic Data Retention with the ECHR’. European Law Journal 11: 365-375.

Chew, M., Balfanz, D., and Laurie, B. (2008). ‘(Under)mining Privacy in Social Networks’, Proceedings of W2SP: Web 2.0 Security and Privacy: 1-5.

Danezis, G. and Clayton, R. (2008). ‘Introducing Traffic Analysis’, in A. Acquisti, S. Gritzalis, C. Lambrinoudakis, and S. D. C. di Vimercati (eds.). Digital Privacy: Theory, Technologies, and Practices. New York: Auerbach Publications. 95-116.

Elmer, G. (2004). Profiling Machines: Mapping the Personal Information Economy. Cambridge, Mass.: The MIT Press.

Friedman, L. M. (2007). Guarding Life’s Dark Secrets: Legal and Social Controls over Reputation, Propriety, and Privacy. Stanford: Stanford University Press. [Excellent book review of text]

Gandy Jr., O. H. (2006). ‘Data Mining, Surveillance, and Discrimination in the Post-9/11 Environment‘, in K. D. Haggerty and R. V. Ericson (eds.). The New Politics of Surveillance and Visibility. Toronto: University of Toronto Press, 79-110. [Early draft presented to the Political Economy Section, IAMCR, July 2002]

Kerr, I. (2002). ‘Online Service Providers, Fidelity, and the Duty of Loyalty’, in T. Mendina and B. Rockenbach (eds.). Ethics and Electronic Information. Jefferson, North Carolina: McFarland Press.

Mitrou, L. (2008). ‘Communications Data Retention: A Pandora’s Box for Rights and Liberties’, in A. Acquisti, S. Gritzalis, C. Lambrinoudakis, and S. D. C. di Vimercati (eds.). Digital Privacy: Theory, Technologies, and Practices. New York: Auerbach Publications. 409-434.

Rubinstein, I., Lee, R. D., and Schwartz, P. M. (2008). ‘Data Mining and Internet Profiling: Emerging Regulatory and Technological Approaches’. University of Chicago Law Review 75: 261.

Saco, D. (1999). ‘Colonizing Cyberspace: National Security and the Internet’, in J. Weldes, M. Laffey, H. Gusterson, and R. Duvall (eds.). Cultures of Insecurity: States, Communities, and the Production of Danger. Minneapolis: University of Minnesota Press, 261-292. [Selection from Google Books]

Simmons, J. L. (2009). ‘Buying You: The Government’s Use of Fourth-Parties to Launder Data about “The People”’, Columbia Business Law Review 2009/3: 950-1012.

Strandburg, K. J. (2008). ‘Surveillance of Emergent Associations: Freedom of Association in a Network Society’, in A. Acquisti, S. Gritzalis, C. Lambrinoudakis, and S. D. C. di Vimercati (eds.). Digital Privacy: Theory, Technologies, and Practices. New York: Auerbach Publications. 435-458.

Winner, L. (1986). The Whale and the Reactor. Chicago: University of Chicago Press. [Book Review]

Zittrain, J. (2008). The Future of the Internet: And How to Stop It. New Haven: Yale University Press. [Book Homepage]

Forthcoming Talk at Social Media Club Vancouver

I’ve been invited to talk to Vancouver’s vibrant Social Media Club on October 7! I’m thrilled to be presenting, and will be giving a related (though very different) talk from the one a few days earlier at Social Media Camp Victoria. Instead of making traffic analysis a focus, I’ll be speaking more broadly of what I’ll be referring to as a ‘malaise of privacy’. This general discomfort with moving around online is (I will suggest) significantly related to the opaque privacy laws and protections that supposedly secure individuals’ privacy online, as contrasted against the daily reality of identity theft, data breaches, and so forth. The thrust will be to provide those in attendance with the theoretical background to develop their own ethic(s) of privacy and to make legal privacy statements more accessible and understandable.

See below for the full abstract:

Supplementing Privacy Policies with a Privacy Ethic

Social media platforms are increasingly common (and often cognitively invisible) facets of Western citizens’ lives; we post photos to Facebook and Flickr, engage in conversations on Orkut and Twitter, and relax by playing games on Zynga and Blizzard infrastructures. The shift to the Internet as a platform for mass real-time socialization and service provision demands a tremendous amount of trust on the part of citizens, and research indicates that citizens are increasingly concerned about whether their trust is well placed. Analytics, behavioural advertising, identity theft, and data mismanagement strain the public’s belief that digital systems are ‘privacy neutral’, even as citizens remain worried about the technological determinisms purported to drive socialized infrastructures.

For this presentation, I begin by briefly reviewing the continuum of the social web, touching on the movement from Web 1.0 to 2.0, and the future as ‘Web Squared’. Next, I address the development of various data policy instruments intended to protect citizens’ privacy online and to facilitate citizens’ trust in social media environments that require personal information as the ‘cost of entry’. Drawing on academic and popular literature, I suggest that individuals participating in social media environments care deeply about their privacy and distrust (and dislike) the ubiquity of online surveillance, especially in the spaces where they communicate and play. Daily experiences with data protection – often manifest in the form of privacy statements and policies – are seen as unapproachable, awkward, and obtuse by most social media users. Privacy statements and their oft-associated surveillance infrastructures contribute to a broader social malaise surrounding the effectiveness of formal data protection and privacy laws.

Given the presence of this malaise, and potential inability of contemporary data protection laws to secure individuals’ privacy, what can be done? I suggest that those involved in social media are well advised to develop an ethic of privacy to supplement legally required privacy statements. By adopting clear statements of ethics, supplemented with legal language and opt-in data disclosures of personal information, operators of social media environments can be part of the solution to society’s privacy malaise. Rather than outlining an ethic myself, I provide the building blocks for those attending to establish their own ethic. I do this by identifying dominant theoretical approaches to privacy: privacy as a matter of control, as an individual vs community vs hybrid issue, as an issue of knowledge and agency, and as a question of contextual data flows. With an understanding of these concepts, those attending will be well suited to supplement their privacy statements and policies with a nuanced and substantive ethics of privacy.