Over the past several months I’ve had the distinct honour of working with, and learning from, a number of close colleagues and friends on the topic of surveillance and censorship on WeChat. We have published a report with the Citizen Lab entitled “We Chat, They Watch: How International Users Unwittingly Build up WeChat’s Chinese Censorship Apparatus.” The report took a mixed methods approach to understand how non-China-registered WeChat accounts were subjected to surveillance which was then used to develop a censorship list applied to users who have registered their accounts in China. Specifically, the report:
Presents results from technical experiments which reveal that WeChat communications conducted entirely among non-China-registered accounts are subject to pervasive content surveillance that was previously thought to be exclusively reserved for China-registered accounts.
Documents and images transmitted entirely among non-China-registered accounts undergo content surveillance wherein these files are analyzed for content that is politically sensitive in China.
Upon analysis, files deemed politically sensitive are used to invisibly train and build up WeChat’s Chinese political censorship system.
From public information, it is unclear how Tencent uses non-China-registered users’ data to enable content blocking, or which policy rationale permits the sharing of data used for blocking between the international and China regions of WeChat.
Tencent’s responses to data access requests failed to clarify how data from international users is used to enable political censorship of the platform in China.
A considerable number of today’s copyfight discussions revolve around the use of DRM to prevent transformative uses of works, to prevent the sharing of works, and to generally limit how individuals engage with the cultural artefacts around them. This post takes a step back from that, thinking through the significance of transforming ‘classic’ works of the English literary canon instead of looking at how new technologies butt heads against free speech. Specifically, I want to argue that NewSouth, Inc.’s decision to publish Huckleberry Finn without the word “nigger” – replacing it with “slave” – demonstrates the importance of works entering the public domain. I refrain from providing a normative framework to evaluate NewSouth’s actual decision – whether changing the particular word is good – and instead use their decision to articulate the conditions constituting ‘bad’ transformations versus ‘good’ transformations of public domain works. I will argue that uniform, uncontested, and totalizing modifications of public domain works are ‘bad’, whereas localized, particular, and discrete transformations should be encouraged given their existence as free expressions capable of (re)generating discussions around topics of social import.
Copyright is intended to operate as an engine for generating expressive content. In theory, by providing a limited monopoly over expressions (not the ideas that are expressed), authors can receive some kind of compensation for the fixed costs that they invest in creating works. While it is true (especially in the digital era) that marginal costs trend towards zero, pricing based on marginal cost alone fails to adequately account for the sunk costs of actual writing. Admittedly, some do write for free (blogs and academic articles in journals might stand as examples), but many people still write with the hope of earning riches through publication. There isn’t anything wrong with profit motivating an author’s desire to create.
In the current CRTC hearings over Canadian ISPs’ use of Deep Packet Inspection (DPI) to manage bandwidth, I see two ‘win situations’ for the dominant carriers:
They can continue to throttle ‘problem’ applications in the future;
The CRTC decides to leave the wireless market alone right now.
I want to talk about the effects of throttling problem applications, and why people talking about DPI should focus on the negative consequences of regulation (something that is, admittedly, often done). In thinking about this, however, I want to first attend to the issue of censorship models, to render transparent the difficulties in relying on censorship-based arguments to oppose uses of DPI. Following this, I’ll consider some of the effects of regulating access to content through protocol throttling. The aim is to suggest that individuals and groups who are opposed to the throttling of particular application-protocols should focus on the effects of regulation, given that it is a more productive space of analysis and argumentation, rather than focusing on DPI as an instrument for censorship.
Let’s first touch on the language of censorship itself. We typically understand this action in terms of a juridico-discursive model, or a model that relies on rules to permit or negate discourse. There are three common elements to this model type:
I’ve recently had the pleasure of reading some of Foucault’s Society Must Be Defended. Over the course of the book Foucault radically revises his earlier positions, and I hope to note and discuss these changes as I come across them. That said, I’ve recently finished the first lecture and wanted to reflect on the power of genealogies and the fragmented character of the ‘net, and to synthesize that with Wu and Goldsmith’s account of the Internet and Foucault’s own thoughts on power as repression. There’s a lot to do, but I think that it might be very profitable to at least toy around with this for a bit.
There is a tendency to try to capture knowledge in unitary architectures. Foucault likens this to trying to develop a unifying concept to explain the behaviour of each droplet of water that explodes from around a sperm whale when it breaches. In the very process of establishing a complex formula to receive this information, the act itself is lost.
The Canadian SIGINT Summaries include downloadable copies of leaked CSE documents, along with summary, publication, and original source information.
Parsons, Christopher; and Molnar, Adam. (2021). “Horizontal Accountability and Signals Intelligence: Lesson Drawing from Annual Electronic Surveillance Reports,” David Murakami Wood and David Lyon (Eds.), Big Data Surveillance and Security Intelligence: The Canadian Case.
Parsons, Christopher. (2015). “Stuck on the Agenda: Drawing Lessons from the Stagnation of ‘Lawful Access’ Legislation in Canada,” Michael Geist (Ed.), Law, Privacy and Surveillance in Canada in the Post-Snowden Era (University of Ottawa Press).
Parsons, Christopher. (2015). “The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians,” Telecom Transparency Project.
Parsons, Christopher. (2015). “Beyond the ATIP: New methods for interrogating state surveillance,” in Jamie Brownlee and Kevin Walby (Eds.), Access to Information and Social Justice (Arbeiter Ring Publishing).
Bennett, Colin; Parsons, Christopher; Molnar, Adam. (2014). “Forgetting and the right to be forgotten” in Serge Gutwirth et al. (Eds.), Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges.
Bennett, Colin; and Parsons, Christopher. (2013). “Privacy and Surveillance: The Multi-Disciplinary Literature on the Capture, Use, and Disclosure of Personal Information in Cyberspace,” in W. Dutton (Ed.), Oxford Handbook of Internet Studies.
McPhail, Brenda; Parsons, Christopher; Ferenbok, Joseph; Smith, Karen; and Clement, Andrew. (2013). “Identifying Canadians at the Border: ePassports and the 9/11 Legacy,” in Canadian Journal of Law and Society 27(3).
Parsons, Christopher; Savirimuthu, Joseph; Wipond, Rob; McArthur, Kevin. (2012). “ANPR: Code and Rhetorics of Compliance,” in European Journal of Law and Technology 3(3).