Relaunch of the SIGINT Summaries


In 2013, journalists began revealing secrets associated with members of the Five Eyes (FVEY) intelligence alliance. These secrets were disclosed by Edward Snowden, a US intelligence contractor. The journalists who reported on the documents did so after carefully assessing their content and removing information that was identified as unduly injurious to national security interests or that threatened to reveal individuals’ identities.

During my tenure at the Citizen Lab I provided expert advice to journalists about the newsworthiness of different documents and, also, about when content should be redacted because its release was not in the public interest. In some cases, documents that were incredibly interesting were never published on the basis that doing so would be injurious to national security, notwithstanding their potential newsworthiness. As an element of my work, I identified and summarized published documents and covernames associated with Canada’s signals intelligence agency, the Communications Security Establishment (CSE).

I am happy to announce a relaunch of the SIGINT summaries, now with far more content.

In all cases the materials which are summarised on my website have been published, in open-source, by professional news organizations or other publishers. None of the material that I summarise or host is new and none of it has been leaked or provided to me by government or non-government bodies. No current or former intelligence officer has provided me with details about any of the covernames or underlying documents. This said, researchers associated with the Citizen Lab and other academic institutions have, in the past, contributed to some of the materials published on this website.

As a caveat, all descriptions of what the covernames mean or refer to, and of what is contained in the individual documents leaked by Edward Snowden, are provided on a best-effort basis. Entries will be updated periodically as time is available to analyse further documents or materials.

How Were Documents Summarized?

In assessing any document I have undertaken the following steps:

  1. Re-created my template for all Snowden documents, which includes information about the title, metadata associated with the document (e.g., when it was made public and in which news story, when it was created, which agency created it), and a listing of the covernames that appear in the document.
  2. When searching documents for covernames, I moved slowly through the document and often zoomed into charts, figures, or other materials in order to decipher both the covernames which are prominent in a given document and those set in much smaller fonts. As a result, my analyses in some cases indicate more covernames than other public repositories, which have relied on OCR-based methods to extract covernames from texts.
  3. I read carefully through the text of each document, sometimes several times, to provide a summary of its highlights. Note that these summaries are informed by my own background and, as such, may miss items that other readers would find notable or interesting. The summaries avoid editorialising to the best of my ability.
  4. In a separate file, I keep a listing of the given agency’s covernames. Working from the covernames identified in the summary, I assessed what, if anything, the document said about each covername and whether it was new to, or expanded, my understanding of that covername. Where it did, I added sentences to the relevant entry in the agency’s covername listing, along with a page reference sourcing the new information. The intent was both to develop a kind of partial covername decoder and to enable other experts to assess how I reached my conclusions about what the covernames mean.
  5. There has sometimes been an editorial process involving rough third-party copyediting and expert peer review. Both, however, have relied on external parties having the time and expertise to provide these services. While many of the summaries and covername listings have been copyedited or reviewed, this is not the case for all of them.
  6. Finally, the new entries have been published on this website.
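The covername-indexing work in step 4 can be sketched in code. The snippet below is a purely hypothetical illustration: the actual work was done by hand, and the covername list and page texts here are invented. It records which pages of a document mention each known covername, producing the page references described above.

```python
import re
from collections import defaultdict

# Hypothetical sketch of the covername-indexing step: record which pages
# of a document mention each known covername. In practice this was done
# by eye, since OCR misses covernames set in small fonts (see step 2).
KNOWN_COVERNAMES = {"AURORAGOLD", "EXAMPLENAME"}  # invented list, for illustration

def index_covernames(pages):
    """pages: list of per-page text strings. Returns {covername: [page numbers]}."""
    index = defaultdict(list)
    for page_no, text in enumerate(pages, start=1):
        # Covernames are conventionally rendered in all capitals.
        for token in sorted(set(re.findall(r"\b[A-Z]{4,}\b", text))):
            if token in KNOWN_COVERNAMES:
                index[token].append(page_no)
    return dict(index)

pages = ["Overview of AURORAGOLD targeting.", "Budget notes.", "AURORAGOLD pipeline."]
print(index_covernames(pages))  # {'AURORAGOLD': [1, 3]}
```

A script like this only complements manual review: it inherits OCR's blind spots, which is precisely why the slower visual pass in step 2 mattered.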

Also, as part of my assessment process I have normalized the names of documents. This has meant I’ve often re-named original documents and, in some cases, split conjoined documents which were published by news organizations into individual documents (e.g., a news organization may have published a series of documents linked to AURORAGOLD as a single .pdf instead of publishing each document or slide deck as its own .pdf). The result is that some of the materials which are published on this website may appear new—it may seem as though there are no other sources on the Internet that appear to host a given document—but, in fact, these are just smaller parts of larger conjoined .pdfs.

Commonly Asked Questions

Why isn’t XXX document included in your list of summarised documents? It’s one of the important ones!

There are a lot of documents to work through and, to some extent, my review of them has been motivated either by specific projects or by which documents I have had time to assess over the past many years. Documents have not been processed based on when they were published. It can take anywhere from 10 minutes to 5 hours or more to process a given document, and at times I have chosen which documents to focus on based on the time available to me or on the research projects I have undertaken.

Why haven’t you talked about the legal or ethical dimensions of these documents?

There are any number of venues where I have professionally discussed the activities which have been carried out by, and continue to be carried out by, Western signals intelligence agencies. The purpose of these summaries is to provide a maximally unbiased explanation of what is actually in the documents, instead of injecting my own views of what they describe.

A core problem in discussing the Snowden documents is a blurring of what the documents actually say versus what people think they say, and the appropriateness or legality of what is described in them. This project is an effort to provide a more robust foundation to understand the documents, themselves, and then from there other scholars and experts may have more robust assessments of their content.

Aren’t you endangering national security by publishing this material?

No, I don’t believe that I am. Documents which I summarise and the covernames which I summarise have been public for many, many years. These are, functionally, now historical texts.

Any professional intelligence service worth its salt will have already mined all of these documents and performed an equivalent level of analysis some time ago. Scholars, the public, and other experts, however, have not had the same resources to similarly analyse and derive value from the documents. In the spirit of open scholarship I am sharing these summaries. I also hope that they are helpful to policymakers, so that they can better assess and understand the historical capabilities of some of the most influential and powerful signals intelligence agencies in the world.

Finally, all of the documents and covernames summarised here have been public for a considerable period of time; the programs described will since have been further developed or terminated, and covernames rotated.

What is the narrative across the documents and covernames?

I regard the content published here as a kind of repository that can help the public and researchers undertake their own processes of discovery, based on their own interests. Are you interested in how the FVEY agencies have assessed VPNs, encryption, smartphones, or other topics? Then you could do a search on agencies’ summary lists or covernames to find content of interest. More broadly, however, I think that there is a substantial amount of material which has been synthesised by journalists or academics; these summaries can be helpful to assess their accuracy in discussing the underlying material and, in most cases, the summaries of particular documents link to journalistic reporting that tries to provide a broader narrative to sets of documents.

Why haven’t you made this easier to understand?

I am aware that some of the material is still challenging to read. This was the case for me when I started reading the Snowden documents, and it led to several rounds of re-reading and revising summaries as colleagues and I developed a deeper understanding of what the documents were trying to communicate.

To some extent, reading the Snowden documents parallels learning a new language: it is frustrating to engage with at first but, over time, you develop an understanding of its structure and grammar. The same is true as you read more of the summaries, underlying documents, and covername descriptions. My intent is that, with the material assembled on this website, the time to become fluent will be massively reduced.

Future Plans

Over time I hope to continue to add to the summaries, though this will continue as a personal historical project. As such, updates will be made only as I have time available to commit to the work.



Pandemic Privacy: A Preliminary Analysis of Collection Technologies, Data Collection Laws, and Legislative Reform during COVID-19

Earlier this week I published a report, “Pandemic Privacy: A Preliminary Analysis of Collection Technologies, Data Collection Laws, and Legislative Reform during COVID-19,” alongside co-authors Benjamin Ballard and Amanda Cutinha. The report provides a preliminary comparative analysis of how different information technologies were mobilized in response to COVID-19 to collect data, the extent to which Canadian health, privacy, or emergency laws impeded the response to COVID-19, and ultimately, the potential consequences of reforming data protection or privacy laws to enable more expansive data collection, use, or disclosure of personal information in future health emergencies.

At its core, we argue that while there were some events that were truly unprecedented in the pandemic–namely how some consumer surveillance and telecommunications systems were transformed to facilitate pandemic-related surveillance, as well as the prospect of how law reform might alter how personal information could be used in future health emergencies–many of these same events have some historical legacy. The COVID-19 pandemic, however, has revealed a situation where familiar disease management concepts have been supercharged by contemporary networked technologies, and further qualitative shifts could take place if privacy law reform further relaxes the requirements that organizations must obtain individuals’ consent before handling their personal information.

While we avoid making specific policy prescriptions in this report our message is clear: in the aftermath of COVID-19 it will be critical for policymakers, technologists, and the public writ large to look back at how governments handled the pandemic, and individuals’ personal information, and assess what must be done to better manage future health emergencies while best protecting the civil and human rights of all persons. We hope that our report will contribute, in some small way, to these forthcoming deliberations.


Executive Summary:

Phrases like “[t]he pandemic which has just swept round the earth has been without precedent”1 have been commonly read or heard throughout the COVID-19 pandemic. At the onset of the COVID-19 pandemic, there was a race to restrict mobility, undertake health surveillance to determine the source or cause of local outbreaks, and secure personal protective equipment for healthcare workers and domestic populations. Further, as in past health emergencies, there were efforts to collect and leverage available information to make sense of the spread of the disease, understand the nature of supply chains so as to determine what equipment was available to treat those affected by the disease or provide assistance to those afflicted with it, as well as to understand how the novel coronavirus was transmitted and its effects so as to develop vaccines to mitigate its worst repercussions.

In “Pandemic Privacy: A preliminary analysis of collection technologies, data collection laws, and legislative reform during COVID-19,” we undertake a preliminary comparative analysis of how different information technologies were mobilized in response to COVID-19 to collect data, the extent to which Canadian health, privacy, or emergency laws impeded the response to COVID-19, and ultimately, the potential consequences of reforming data protection or privacy laws to enable more expansive data collection, use, or disclosure of personal information in future health emergencies. In analyzing how data has been collected in the United States, United Kingdom, and Canada, we found that while many of the data collection methods could be mapped onto a trajectory of past collection practices, the breadth and extent of data collection, in tandem with how communications networks were repurposed, constituted novel technological responses to a health crisis. Similarly, while the intersection of public and private interests in providing healthcare and government services is not new, the ability of private companies such as Google and Apple to forcefully shape some of the technology-enabled pandemic responses speaks to their significant capacity to guide or direct public health measures that rely on contemporary smartphone technologies. While we found that the uses of technologies were linked to historical efforts to combat the spread of disease, the nature and extent of private surveillance to enable public action was arguably unprecedented.

Turning from the technologies involved to collect data, we shift to an analysis of how Canadian law enabled governmental collections, uses, and disclosures of personal information and how legislation that was in force before the outbreak of COVID-19 empowered governments to overcome any legal hurdles that might have prevented state agencies from using data to address COVID-19 in Canada. Despite possessing this lawful authority, however, governments of Canada were often accused of inadequately responding to the pandemic, and they, in turn, sometimes suggested or indicated that privacy legislation impaired their abilities to act. These concerns have precedent insofar as they were raised following the 2003 SARS pandemic, but they were then–as now–found to be meritless: privacy legislation has not been an impediment to data collection, use, or sharing, despite claims to the contrary. The challenges faced by governments across Canada were, in fact, precedented and linked to poor governmental policies and capabilities to collect, use, and share data just as in past health crises. 

Perhaps partially in response to perceptions that privacy rights afforded to Canadians impeded the pandemic response, the federal government of Canada introduced legislation in August 2020 (which ultimately did not pass into law due to an election) that would have both reified existing exemptions to privacy protections and empowered private companies to collect, use, and disclose personal information for ‘socially beneficial practices’ without first obtaining individuals’ consent. While it is hardly unprecedented for governments to draft and introduce privacy legislation that would expand how personal information might be used, the exclusion of human rights to balance commercial uses of personal information stands as a novel decision at a time when such legislation is regularly linked with explicit human rights protections.

This report proceeds as follows. After a short introduction in Section one, we present our methodologies in Section two. Section three turns to how contemporary digital technologies were used to collect data in the United States, United Kingdom, and Canada. Our principal finding is that collection efforts were constrained by the ways in which private companies chose to enable data collection, particularly in the case of contact tracing and exposure notifications, by how these companies chose to share data that was under their control, and by how data was repurposed to assist in containing COVID-19. The breadth and extent of data collection was unprecedented when compared to past health crises.

In Section four, we focus on Canadian legal concerns regarding the extent to which privacy and civil liberties protections affected how the federal and provincial governments handled data in their responses to the COVID-19 pandemic. We find that privacy legislation did not establish any notable legal barriers for collecting, sharing, and using personal information given the permissibility of such activities in health emergencies, as these actions are laid out in provincial health and emergencies laws. More broadly, however, the legislative standard that allows for derogations from consent in emergency situations may be incompatible with individuals’ perceptions of their privacy rights and what they consider to be ‘appropriate’ infringements of these rights, especially when some individuals contest the gravity (or even existence) of the COVID-19 pandemic in the first place.

Section five turns to how next-generation privacy legislation, such as the Consumer Privacy Protection Act (CPPA), might raise the prospect of significant changes in how data could be collected, used, or disclosed in future health crises. The CPPA did not enter into law: a Canadian federal election killed the bill on the Order Paper. Nonetheless, we find that a law such as the CPPA could facilitate unprecedented non-consensual handling of personal information.

Section six presents a discussion of the broader themes that cut across the report. These include how the pandemic further reveals the redistribution of power between states and private organizations, the need for novel digital epidemiological processes to embody strong bioethical and equity commitments toward those involved in digital epidemiological experiments, and the need to assess the role of consent in future health emergencies, especially when new legislative frameworks might permit more permissive, non-consensual data collection, use, and disclosure for health-related purposes. Section seven presents a short conclusion to our report.

Footnotes

1. George A. Soper. (1919). “The Lessons of the Pandemic,” Science 49(1274).


Download the full report: “Pandemic Privacy: A Preliminary Analysis of Collection Technologies, Data Collection Laws, and Legislative Reform during COVID-19”

The Politics of Deep Packet Inspection: What Drives Surveillance by Internet Service Providers?

Today, I am happy to make my completed doctoral dissertation available to the public. The dissertation examines what drives, and hinders, wireline network practices that are enabled by Deep Packet Inspection (DPI) routers. Such routers are in wide use by Internet service providers (ISPs) in Canada, the United States, and United Kingdom, and offer the theoretical capacity for service providers to intrusively monitor, mediate, and modify their subscribers’ data packets in real or near-real time. Given the potential uses of the routers, I was specifically interested in how the politics of deep packet inspection intersected with the following issues: network management practices, content control and copyright, advertising, and national security/policing.

Based on the potential capabilities of deep packet inspection technologies – and the warnings that such technologies could herald the ‘end of the Internet’ as it is known by citizens of the West – I explored what has actually driven the uptake of the technology in Canada, the US, and the UK. I ultimately found that though there were variations in different states’ regulatory processes, regulators tended to arrive at common conclusions. Regulatory convergence stands in opposition to the divergence that arose as elected officials entered into the DPI debates: such officials have been guided by domestic politics, and tended to reach significantly different conclusions. In effect, while high-expertise regulatory networks reached common conclusions, elected political officials have demonstrated varying degrees of technical expertise and instead have focused on the politics of communications surveillance. In addition to regulators and elected officials, court systems have also been involved in adjudicating how, when, and under what conditions DPI can be used to mediate data traffic. Effectively, government institutions have served as the primary arenas in which DPI issues are taken up, though the involved government actors often exhibited their own interests in how issues were to be taken up or resolved. The relative roles of these different state bodies in the case studies arguably reflect underlying political cultures: regulators are principally involved in the Canadian situation, elected officials and courts play a significant role in the US, and the UK has principally seen DPI debates settled by regulators and elected officials.

Ultimately, while there are important comparative public policy conclusions to the dissertation, such conclusions only paint part of the picture about the politics of deep packet inspection. The final chapter of the dissertation discusses why the concepts of surveillance and privacy are helpful, but ultimately insufficient, to appreciate the democratic significance of deep packet inspection equipment. In response, I suggest that deliberative democratic theory can provide useful normative critiques of DPI-based packet inspection. Moreover, these critiques can result in practical policy proposals that can curtail DPI-based practices capable of detrimentally stunting discourse between citizens using the Internet for communications. The chapter concludes with a discussion of how this research can be advanced in the future; while I have sought to clear away some of the murk concerning the technology, my research represents only the first of many steps to reorient Internet policies such that they support, as opposed to threaten, democratic values.

Formal Abstract:

Surveillance on the Internet today extends beyond collecting intelligence at the layer of the Web: major telecommunications companies use technologies to monitor, mediate, and modify data traffic in real time. Such companies functionally represent communicative bottlenecks through which online actions must pass before reaching the global Internet and are thus perfectly positioned to develop rich profiles of their subscribers and modify what they read, do, or say online. And some companies have sought to do just that. A key technology, deep packet inspection (DPI), facilitates such practices.
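To make concrete what distinguishes ‘deep’ inspection from conventional header-based routing, the sketch below classifies packets by matching byte signatures in their payloads. It is a toy illustration, not any vendor’s implementation: real DPI gear uses large signature sets, flow reassembly, and statistical heuristics, but the core idea of looking past headers into content is the same. The BitTorrent and HTTP prefixes are the well-known openings of those protocols; everything else is invented for the example.

```python
# A minimal, illustrative sketch of payload-based traffic classification.
# Header-only inspection sees ports and addresses; "deep" inspection reads
# the payload itself, which is what enables per-application management.
SIGNATURES = {
    b"\x13BitTorrent protocol": "bittorrent",  # BitTorrent handshake prefix
    b"GET ": "http",                           # start of an HTTP GET request
}

def classify(payload: bytes) -> str:
    """Label a packet payload by matching known application signatures."""
    for signature, label in SIGNATURES.items():
        if payload.startswith(signature):
            return label
    return "unknown"  # unmatched traffic might be left in a default queue
```

Once traffic is labelled this way, an operator can throttle, prioritize, or log it per application, which is precisely why the technology sits at the centre of the network neutrality debates discussed below.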

In the course of evaluating the practices, regulations, and politics that have driven DPI in Canada, the US, and UK it has become evident that the adoption of DPI tends to be dependent on socio-political and economic conditions. Simply put, market or governmental demand is often a prerequisite for the technology’s adoption by ISPs. However, the existence of such demand is no indication of the success of such technologies; regulatory or political advocacy can lead to the restriction or ejection of particular DPI-related practices.

The dissertation proceeds by first outlining how DPI functions and then what has driven its adoption in Canada, the US, and UK. Three conceptual frameworks – path dependency, international governance, and domestic framing – are used to explain whether power structures embedded in technological systems themselves, international standards bodies, or domestic politics are principally responsible for the adoption of, or resistance to, the technology in each nation. After exploring how DPI has arisen as an issue in the respective states, I argue that though domestic conditions have principally driven DPI’s adoption, and though the domestic methods of governing DPI and its associated practices have varied across cases, the outcomes of such governance are often quite similar. More broadly, I argue that while the technology and its associated practices constitute surveillance and can infringe upon individuals’ privacy, the debates around DPI must more expansively consider how DPI raises existential risks to deliberative democratic states. I conclude by offering some suggestions on mitigating the risks DPI poses to such states.

Download ‘The Politics of Deep Packet Inspection: What Drives Surveillance by Internet Service Providers?’ (.pdf)

ISP Audits in Canada

There are ongoing concerns in Canada about the CRTC’s capacity to gauge and evaluate the quality of Internet service that Canadians receive. This was most recently brought to the fore when the CRTC announced that Canada ranked second to Japan in broadband access speeds. Such a stance is PR spin and, as noted by Peter Nowak, “[o]nly in the halcyon world of the CRTC, where the sky is purple and pigs can fly, could that claim possibly be true.” This head-in-the-sand approach to understanding the Canadian broadband environment is, unfortunately, similarly reflected in the lack of a federal digital strategy and in the absolutely inadequate funding for even the most basic governmental cyber-security.

To return the CRTC from the halcyon world it is presently stuck within, and to establish firm empirical data to guide a digital economic strategy, the Government of Canada should establish a framework to audit ISPs’ infrastructure and network practices. Ideally this would result in an independent body that could examine the quality and speed of broadband throughout Canada. Its methodology and results would be published publicly and could assure all parties – businesses, citizens, and consumers – that they could trust and rely upon ISPs’ infrastructure. Importantly, having an independent body research and publish data concerning Canadian broadband would relieve companies and consumers of having to assume this role, freeing them to use the Internet for productive (rather than watchdog-related) purposes.


Choosing Winners with Deep Packet Inspection

I see a lot of the network neutrality discussion as one surrounding the conditions under which applications can, and cannot, be prevented from running. On one hand there are advocates who maintain that telecommunications providers – ISPs such as Bell, Comcast, and Virgin – shouldn’t be responsible for ‘picking winners and losers’ on the basis that consumers should make these choices. On the other hand, advocates for managed (read: functioning) networks insist that network operators have a duty and responsibility to fairly provision their networks in a way that doesn’t see one small group negatively impact the experiences of the larger consumer population. Deep Packet Inspection (DPI) has become a hot-button technology in light of the neutrality debates, given its potential to let ISPs determine what applications function ‘properly’ and which see their data rates delayed for purposes of network management. What is often missing in the network neutrality discussions is a comparison between the uses of DPI across jurisdictions and how these uses might impact ISPs’ abilities to prioritize or deprioritize particular forms of data traffic.

As part of an early bit of thinking on this, I want to direct our attention to Canada, the United States, and the United Kingdom to start framing how these jurisdictions are approaching the use of DPI. In the process, I will make the claim that Canada’s recent CRTC ruling on the use of the technology appears increasingly progressive in light of recent decisions in the US and the likelihood of the UK’s Digital Economy Bill (DEB) becoming law. Up front I should note that while I think Canada can be read as ‘progressive’ on the network neutrality front, this shouldn’t suggest that either the CRTC or parliament has done enough: further clarity into the practices of ISPs, additional insight into the technologies they use, and an ongoing discussion of traffic management systems are needed in Canada. Canadian communications increasingly pass through IP networks and, as a result, our communications infrastructure should be seen as being as important as defence, education, and health care, each of which is tied to its own critical infrastructure but connected to the others and enabled through digital communications systems. Digital infrastructures draw together the fibres connecting the Canadian people, Canadian business, and Canadian security, and we need to elevate discussions about this infrastructure to make it a prominent part of the national agenda.


Three-Strike Copyright

To fully function as a student in today’s Western democracies means having access to the Internet. In some cases this means students use Content Management Systems (CMSs) such as Drupal, Blackboard, or wikis (to name a few examples) to submit homework and participate in collaborative group assignments. CMSs are great because teachers can monitor the effectiveness of students’ group contributions and retain timestamps of when students have turned in their work. Thus, when Sally doesn’t turn in her homework for a few weeks, and ‘clearly’ isn’t working with her group in the school-sanctioned CMS, the teacher can call home and talk with Sally’s parents about Sally’s poor performance.

At least, that’s the theory.

Three-Strike Copyright and Some Numbers

I’m not going to spend time talking about the digital divide (save to note that it’s real, and it penalises students in underprivileged environments by preventing them from participating as equals in the digitized classroom), nor am I going to talk about the inherent privacy and security issues that arise as soon as teachers use digital management systems. No, I want to turn our attention across the Atlantic to Britain, where the British parliament will soon be considering legislation that would implement a three-strike copyright enforcement policy. France is in the process of implementing a similar law (with the expectation that it will be in place by summer 2008), which would turn ISPs into data police. Under these policies, if a user (read: household) is caught infringing copyright three times (they get two warnings), they can lose access to the ’net following the third infringement.
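The graduated-response rule these proposals describe is simple to state precisely. The sketch below is purely illustrative (the schemes were policy proposals, not published code): two warnings, then loss of access on the third recorded infringement, with all of the evidentiary questions about what counts as being ‘caught’ left entirely out of frame.

```python
# A sketch of the graduated-response ("three-strike") logic described
# above: two warnings, then termination of access on the third recorded
# infringement. Purely illustrative; no actual scheme is modelled here.
def respond(strikes: int) -> str:
    """Map a household's recorded infringement count to a policy response."""
    if strikes <= 0:
        return "no action"
    if strikes <= 2:
        return f"warning {strikes} of 2"
    return "access terminated"
```

Spelling the rule out this way also highlights what the policy debates turned on: the counter is attached to a household connection, not to the individual who allegedly infringed.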
