Transparent Practices Don’t Stop Prejudicial Surveillance

In February I’m attending iConference 2012, where I’m helping to organize a workshop titled “Networked Surveillance: Access Control, Transparency, Power, and Circumvention in the 21st Century.” The workshop’s participants will consider whether networked surveillance challenges notions of privacy and neutrality, exploits the openness of data protocols, or requires critical investigations into how these surveillance technologies are developed and regulated. Participants will be arriving from around the world and speaking to one (or more) of the workshop’s four thematics: Access Control, Transparency, Power, and Circumvention. As part of the workshop, all participants must prepare a short position statement that identifies their interest in networked surveillance while establishing grounds to launch a conversation. My contribution, titled “Transparent Practices Don’t Stop Prejudicial Surveillance,” follows.

Transparent Practices Don’t Stop Prejudicial Surveillance

Controversies around computer processing and data analysis technologies led to the development of the Fair Information Practice Principles (FIPs), principles that form the bedrock of today’s privacy codes and laws. Drawing lessons from privacy codes and from Canadian ISPs’ surveillance practices, I argue that transparency constitutes a necessary but insufficient measure to mitigate prejudicial surveillance practices and technologies. We must go further and inject public values into development cycles while also intentionally hobbling surveillance technologies to rein in their most harmful potentialities.

Lesson Drawing from Privacy Principles and Codes

FIPs are used to make organizations accountable for how and why information is collected, for how information is processed, and for the accuracy of retained information. It is contestable whether FIPs, however well integrated into policy and law, are effective in preventing surveillance technologies and practices so much as in legitimizing them. As noted by Rule, codes based on FIPs “help surveillance systems to achieve their intended ends more fairly and openly” but do not “help us decide when institutional appetites for personal information simply go too far.”[1] Privacy and data protection rules and laws may make data collection and processing activities more transparent while simultaneously failing to “significantly reduce or mitigate the amount of potentially damaging social sorting that occurs.”[2] Moreover, codes and principles are commonly bound within legal privacy protections that “tend to be more circumscribed than the subjective experience of violation associated with new forms of surveillance.”[3] The law simply doesn’t keep up with, or adequately address, the surveillance-related harms and injustices that people experience on a regular basis.

While codes based on FIPs might limit data collection and empower end-users when users know they are exchanging data with specific data collectors, such codes “work less well in systems in which devices blab information indiscriminately so that there’s no way to identify a class of information collectors who can be made subject to the rules.”[4] The Internet, and the devices that silently communicate with data collectors via the Internet, constitutes a space where FIPs minimally limit the spread of surveillance technologies and practices. Even if organizations are held accountable for the data they analyze and process, end-users’ abilities to ascertain who and what is collecting and processing information is limited. Formalized privacy rules, in other words, can influence the fairness of surveillance but are less likely to stop the surveillance practices themselves.

Canadian ‘Consequences’ of Rendering Surveillance Transparent

FIPs’ limited effectiveness in stopping the spread of novel surveillance processes and practices, and in limiting their harms, is mirrored by Canadian efforts to mediate ISPs’ surveillance technologies and practices. Numerous Canadian ISPs use deep packet inspection (DPI) systems to inspect and analyze Canadians’ encrypted and unencrypted data transmissions. Such systems evaluate data transmission protocols (e.g. SMTP, HTTP/HTTPS) and, depending on how they are configured, can conduct content and flow analyses, as well as modify and interrupt packet flows in real-time.[5] In light of significant opposition to DPI, the Canadian Radio-television and Telecommunications Commission (CRTC) and the Office of the Privacy Commissioner of Canada (OPC) investigated DPI-related practices. Both bodies established provisions to limit how ISPs could employ the technology. Despite both organizations requiring ISPs to publicly declare how they use DPI, ISPs have regularly acted beyond their publicly stated practices. These companies have been transparent with neither consumers nor regulators, nor have breaches of government provisions led to serious punishments.[6] In effect, consumer and governmental awareness of the technology has done little to prevent harmful uses.[7] Rather than stopping prejudicial actions that limit online speech and association, the CRTC and OPC legitimized some practices while seemingly having limited effect on ISPs’ extensions of practices beyond regulator- and commissioner-established limits. Transparency helps us understand (some of) what is happening in Canada’s telecommunications networks, but it has not stopped bad practices, prevented fungible surveillance technologies from being widely deployed, or led to consequences for secretive extensions of DPI-related practices.
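To make the distinction in the paragraph above concrete, the sketch below contrasts shallow (header/port-based) traffic classification with deep (payload-based) inspection. It is a toy illustration only, not a description of any vendor’s actual system: real DPI appliances classify live traffic at line rate inside carrier networks, and the signatures here (HTTP verbs, an SMTP greeting, the TLS record header, the BitTorrent handshake) are deliberately simplified examples.

```python
# Toy illustration of shallow vs. deep inspection. All names and
# signatures here are hypothetical simplifications for exposition.

WELL_KNOWN_PORTS = {25: "SMTP", 80: "HTTP", 443: "HTTPS"}

def shallow_inspect(dst_port: int) -> str:
    """Header-only inspection: infer the protocol from the destination
    port, without ever looking inside the packet."""
    return WELL_KNOWN_PORTS.get(dst_port, "unknown")

def deep_inspect(payload: bytes) -> str:
    """Payload inspection: match protocol signatures inside the data
    itself, which is how DPI can classify traffic that runs on
    non-standard ports or tries to masquerade as another protocol."""
    if payload.startswith(b"GET ") or payload.startswith(b"POST "):
        return "HTTP"
    if payload.startswith(b"EHLO") or payload.startswith(b"HELO"):
        return "SMTP"
    if payload.startswith(b"\x16\x03"):
        return "TLS"  # TLS record: type 0x16 (handshake), version 0x03.xx
    if payload.startswith(b"\x13BitTorrent protocol"):
        return "BitTorrent"  # standard BitTorrent handshake prefix
    return "unknown"

# A BitTorrent handshake sent over port 80 defeats the port check
# (which reports HTTP) but not the payload check.
payload = b"\x13BitTorrent protocol"
print(shallow_inspect(80))   # port alone suggests HTTP
print(deep_inspect(payload)) # payload reveals BitTorrent
```

The gap between the two functions is why DPI raised the regulatory concerns described above: classification by payload reaches into the content of communications rather than merely its addressing envelope.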

Hobbling Fungible Surveillance Technologies and Stopping Unjust Practices

There isn’t a necessarily positive link between knowledge and power, especially when speaking of political or social power; knowledge constitutes just one of many elements that frame power relations.[8] That said, by empowering those with knowledge to influence technical developments at the product development phase rather than the implementation phase, we might rein in particularly expansive network surveillance tools and jettison such systems’ prejudicial capabilities. Such empowerment might include involving public policy advocates who are versed in human and civil rights during the earliest phases of technical design processes. They could inject public concerns and values into development processes and excise coding mechanisms that challenge basic democratic values. Moreover, we could require inefficiencies in technical surveillance devices to minimize their capacity to threaten basic social values: rather than simply guarding against particular practices in policy, we could mandate that surveillance products include limitations that are technically challenging to overcome. The ultimate aim of such limitations is to restrain surveillance technologies’ fungibility and thus increase the friction of expanding their uses. Such intentional injections of friction, combined with public advocates’ involvement in development processes, could hobble the growth of surveillance practices. Placing the emphasis on limiting surveillance capabilities at the development stage would be a positive step beyond current data protection regimes, which tend to influence the fairness of surveillance technologies and practices rather than stopping them altogether.


[1] J. B. Rule. (2007). Privacy in Peril. Toronto: Oxford University Press. P. 27.

[2] D. Lyon. (2007). Surveillance Studies: An Overview. Cambridge, UK: Polity Press. P. 173.

[3] K. D. Haggerty and R. V. Ericson. (2007). “The New Politics of Surveillance and Visibility,” in Kevin D. Haggerty and Richard V. Ericson (Eds). The New Politics of Surveillance and Visibility. Toronto: The University of Toronto Press. P. 9.

[4] J. Weinberg. (2008). “RFID and Privacy,” in A. Chander, L. Gelman, M. J. Radin (Eds.) Securing Privacy in the Internet Age. Stanford: Stanford Law Books. Pp. 263-264.

[5] C. Parsons. (2011). “Deep Packet Inspection,” Big Brother Incorporated research site. Published November 30, 2011. Available: <>

[6] M. Geist. (2011). “Canada’s Net Neutrality Enforcement Failures,” Michael Geist. Published July 8, 2011. Available: <>

[7] While there have been some successes – Rogers Communications Ltd. may face some fines for their behaviors – it should be noted that it took over a year to raise the issue with the CRTC, and the process for investigating and disciplining the company has yet to conclude. See: N. Kyonka. (2011). “Whitelisting, an ISP solution to throttling, may conflict with net neutrality rules,” The Wire Report. Published Sept 27, 2011. Available: <>

[8] L. Winner. (1986). The Whale and the Reactor. Chicago: University of Chicago Press. Pp. 109-110.