Relaunch of the SIGINT Summaries


In 2013, journalists began revealing secrets associated with members of the Five Eyes (FVEY) intelligence alliance. These secrets were disclosed by Edward Snowden, a US intelligence contractor. The journalists who reported on the documents published them only after carefully assessing their content and removing information that was identified as unduly injurious to national security interests or that threatened to reveal individuals’ identities.

During my tenure at the Citizen Lab I provided expert advice to journalists about the newsworthiness of different documents and about when content should be redacted because its release was not in the public interest. In some cases, documents that were incredibly interesting were never published on the basis that doing so would be injurious to national security, notwithstanding their potential newsworthiness. As part of this work, I identified and summarized published documents and covernames associated with Canada’s signals intelligence agency, the Communications Security Establishment (CSE).

I am happy to announce a relaunch of the SIGINT summaries, now with far more content.

In all cases the materials which are summarised on my website have been published openly by professional news organizations or other publishers. None of the material that I summarise or host is new, and none of it has been leaked or provided to me by government or non-government bodies. No current or former intelligence officer has provided me with details about any of the covernames or underlying documents. This said, researchers associated with the Citizen Lab and other academic institutions have, in the past, contributed to some of the materials published on this website.

As a caveat, all descriptions of what the covernames mean or refer to, and of what is contained in individual documents leaked by Edward Snowden, are provided on a best-effort basis. Entries will be updated periodically as time is available to analyse further documents or materials.

How Were Documents Summarized?

In assessing any document I have undertaken the following steps:

  1. Completed my template for the document, which includes information about the title, metadata associated with the document (e.g., when it was made public and in what news story, when it was created, which agency created it), and a listing of the covernames that appear in the document.
  2. When searching documents for covernames, I moved slowly through the document and often zoomed into charts, figures, or other materials in order to decipher both the covernames which are prominent in a given document and those in much smaller fonts. As a result, my analyses sometimes identify more covernames than other public repositories, which have relied on OCR-based methods to extract covernames from texts.
  3. I read carefully through the text of the document, sometimes several times, to provide a summary of its highlights. Note that this is based on my own background and, as such, the summaries may miss items that other readers would find notable or interesting. The summaries avoid editorialising to the best of my ability.
  4. In a separate file, I keep a listing of the given agency’s covernames. Using the covernames listed in the summary, I worked through the document in question to assess what, if anything, was said about each covername and whether what was said was new or expanded my understanding of it. Where it did, I added sentences to that covername’s entry in the relevant agency’s listing, along with a page reference sourcing the new information. The intent was both to develop a kind of partial covername decoder and to enable other experts to assess how I reached my conclusions about what covernames mean.
  5. There was sometimes an editorial process involving rough third-party copyediting and expert peer review. Both, however, depended on external parties having the time and expertise to provide these services. While many of the summaries and covername listings have been copyedited or reviewed, this is not the case for all of them.
  6. Finally, the new entries have been published on this website.
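
The OCR-based extraction used by other repositories, mentioned in step 2, can be sketched as follows. This is a purely hypothetical illustration — the regular expression, length threshold, and exclusion list below are my own assumptions, not any repository’s actual method:

```python
import re

def extract_covernames(text, min_length=5):
    """Flag covername candidates: FVEY covernames are typically single
    all-caps tokens (e.g. EONBLUE, XKEYSCORE, AURORAGOLD)."""
    pattern = r"\b[A-Z][A-Z0-9]{%d,}\b" % (min_length - 1)
    candidates = re.findall(pattern, text)
    # Common all-caps terms that are not covernames (an illustrative,
    # incomplete list).
    known_terms = {"SIGINT", "COMINT", "TOP", "SECRET", "FVEY"}
    return sorted({c for c in candidates if c not in known_terms})

sample = "GCSB analysts lacked access to XKEYSCORE while supporting STATEROOM tasks."
print(extract_covernames(sample))  # -> ['STATEROOM', 'XKEYSCORE']
```

A pass like this only sees whatever text the OCR layer recovered, which is why manually zooming into charts and figures (step 2) surfaces covernames in small fonts that automated pipelines drop.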

Also, as part of my assessment process I have normalized the names of documents. This has meant I’ve often re-named original documents and, in some cases, split conjoined documents which were published by news organizations into individual documents (e.g., a news organization may have published a series of documents linked to AURORAGOLD as a single .pdf instead of publishing each document or slide deck as its own .pdf). The result is that some of the materials which are published on this website may appear new—it may seem as though there are no other sources on the Internet that appear to host a given document—but, in fact, these are just smaller parts of larger conjoined .pdfs.

Commonly Asked Questions

Why isn’t XXX document included in your list of summarised documents? It’s one of the important ones!

There are a lot of documents to work through and, to some extent, my review of them has been motivated either by specific projects or simply by which documents I have had time to assess over the past many years. Documents have not been processed based on when they were published. It can take anywhere from 10 minutes to 5 hours or more to process a given document, and at times I have chosen which documents to focus on based on the time available to me or the research projects I have undertaken.

Why haven’t you talked about the legal or ethical dimensions of these documents?

There are any number of venues where I have professionally discussed the activities which have been carried out by, and continue to be carried out by, Western signals intelligence agencies. The purpose of these summaries is to provide a maximally unbiased explanation of what is actually in the documents, instead of injecting my own views of what they describe.

A core problem in discussing the Snowden documents is the blurring of what the documents actually say with what people think they say, and with the appropriateness or legality of what is described in them. This project is an effort to provide a more robust foundation for understanding the documents themselves, so that other scholars and experts can make better-grounded assessments of their content.

Aren’t you endangering national security by publishing this material?

No, I don’t believe that I am. The documents and covernames which I summarise have been public for many, many years. These are, functionally, now historical texts.

Any professional intelligence service worth its salt will have already mined all of these documents and performed an equivalent level of analysis some time ago. Scholars, the public, and other experts, however, have not had the same resources to similarly analyse and derive value from the documents. In the spirit of open scholarship I am sharing these summaries. I also hope that they are helpful to policymakers, so that they can better assess and understand the historical capabilities of some of the most influential and powerful signals intelligence agencies in the world.

Finally, all of the documents and covernames which are summarised have been public for a considerable period of time. Programs will have since been further developed or terminated, and covernames rotated.

What is the narrative across the documents and covernames?

I regard the content published here as a kind of repository that can help the public and researchers undertake their own processes of discovery, based on their own interests. Are you interested in how the FVEY agencies have assessed VPNs, encryption, smartphones, or other topics? Then you could search the agencies’ summary lists or covernames to find content of interest. More broadly, a substantial amount of material has been synthesised by journalists and academics; these summaries can help assess the accuracy of that reporting. In most cases, the summaries of particular documents link to journalistic reporting that tries to provide a broader narrative to sets of documents.

Why haven’t you made this easier to understand?

I am aware that some of the material is still challenging to read. This was the case for me when I started reading the Snowden documents, and it led to several rounds of re-reading and revising summaries as colleagues and I developed a deeper understanding of what the documents were trying to communicate.

To some extent, reading the Snowden documents parallels learning a new language: it is frustrating at first but, over time, you develop an understanding of the language’s structure and grammar. The same is true as you read more of the summaries, underlying documents, and covername descriptions. My intent is that, with the material assembled on this website, the time to become fluent will be massively reduced.

Future Plans

Over time I hope to continue to add to the summaries, though this will continue as a personal historical project. As such, updates will be made only as I have time available to commit to the work.


  1. As of writing, no reviewed Snowden document explicitly discloses an ASD covername. ↩︎

Pandemic Privacy: A Preliminary Analysis of Collection Technologies, Data Collection Laws, and Legislative Reform during COVID-19

Earlier this week I published a report, “Pandemic Privacy: A Preliminary Analysis of Collection Technologies, Data Collection Laws, and Legislative Reform during COVID-19,” alongside co-authors Benjamin Ballard and Amanda Cutinha. The report provides a preliminary comparative analysis of how different information technologies were mobilized in response to COVID-19 to collect data, the extent to which Canadian health, privacy, or emergencies laws impeded the response to COVID-19, and, ultimately, the potential consequences of reforming data protection or privacy laws to enable more expansive data collection, use, or disclosure of personal information in future health emergencies.

At its core, we argue that while there were some events that were truly unprecedented in the pandemic–namely how some consumer surveillance and telecommunications systems were transformed to facilitate pandemic-related surveillance, as well as the prospect of how law reform might alter how personal information could be used in future health emergencies–many of these same events have some historical legacy. The COVID-19 pandemic, however, has revealed a situation where familiar disease management concepts have been supercharged by contemporary networked technologies, and further qualitative shifts could take place if privacy law reform further relaxes the requirement that organizations obtain individuals’ consent before handling their personal information.

While we avoid making specific policy prescriptions in this report, our message is clear: in the aftermath of COVID-19 it will be critical for policymakers, technologists, and the public writ large to look back at how governments handled the pandemic, and individuals’ personal information, and assess what must be done to better manage future health emergencies while best protecting the civil and human rights of all persons. We hope that our report will contribute, in some small way, to these forthcoming deliberations.


Executive Summary:

Phrases like “[t]he pandemic which has just swept round the earth has been without precedent”1 have been commonly read or heard throughout the COVID-19 pandemic. At the onset of the COVID-19 pandemic, there was a race to restrict mobility, undertake health surveillance to determine the source or cause of local outbreaks, and secure personal protective equipment for healthcare workers and domestic populations. Further, as in past health emergencies, there were efforts to collect and leverage available information to make sense of the spread of the disease, understand the nature of supply chains so as to determine what equipment was available to treat those affected by the disease or provide assistance to those afflicted with it, as well as to understand how the novel coronavirus was transmitted and its effects so as to develop vaccines to mitigate its worst repercussions.

In “Pandemic Privacy: A preliminary analysis of collection technologies, data collection laws, and legislative reform during COVID-19,” we undertake a preliminary comparative analysis of how different information technologies were mobilized in response to COVID-19 to collect data, the extent to which Canadian health, privacy, or emergencies laws impeded the response to COVID-19, and, ultimately, the potential consequences of reforming data protection or privacy laws to enable more expansive data collection, use, or disclosure of personal information in future health emergencies. In analyzing how data has been collected in the United States, United Kingdom, and Canada, we found that while many of the data collection methods could be mapped onto a trajectory of past collection practices, the breadth and extent of data collection in tandem with how communications networks were repurposed constituted novel technological responses to a health crisis. Similarly, while the intersection of public and private interests in providing healthcare and government services is not new, the ability of private companies such as Google and Apple to forcefully shape some of the technology-enabled pandemic responses speaks to the significant ability of private companies to guide or direct public health measures that rely on contemporary smartphone technologies. While we found that the uses of technologies were linked to historical efforts to combat the spread of disease, the nature and extent of private surveillance to enable public action was arguably unprecedented.

Turning from the technologies involved to collect data, we shift to an analysis of how Canadian law enabled governmental collections, uses, and disclosures of personal information and how legislation that was in force before the outbreak of COVID-19 empowered governments to overcome any legal hurdles that might have prevented state agencies from using data to address COVID-19 in Canada. Despite possessing this lawful authority, however, governments of Canada were often accused of inadequately responding to the pandemic, and they, in turn, sometimes suggested or indicated that privacy legislation impaired their abilities to act. These concerns have precedent insofar as they were raised following the 2003 SARS pandemic, but they were then–as now–found to be meritless: privacy legislation has not been an impediment to data collection, use, or sharing, despite claims to the contrary. The challenges faced by governments across Canada were, in fact, precedented and linked to poor governmental policies and capabilities to collect, use, and share data just as in past health crises. 

Perhaps partially in response to perceptions that privacy rights afforded to Canadians impeded the pandemic response, the federal government of Canada introduced legislation in August 2020 (which ultimately did not get passed into law due to an election) that would both have reified existing exemptions to privacy protections while empowering private companies to collect, use, and disclose personal information for further ‘socially beneficial practices’ without first obtaining individuals’ consent. While it is hardly unprecedented for governments to draft and introduce privacy legislation that would expand how personal information might be used, the exclusion of human rights to balance commercial uses of personal information stands as a novel decision where such legislation is now regularly linked with explicit human rights protections. 

This report proceeds as follows. After a short introduction in Section one, we present the methodologies we used in Section two. Section three turns to how contemporary digital technologies were used to collect data in the United States, United Kingdom, and Canada. Our principal finding is that collection efforts were constrained by the ways in which private companies chose to enable data collection, particularly in the case of contact tracing and exposure notifications, by how these companies chose to share data that was under their control, and by how data was repurposed to assist in containing COVID-19. The breadth and extent of data collection was unprecedented when compared to past health crises.

In Section four, we focus on Canadian legal concerns regarding the extent to which privacy and civil liberties protections affected how the federal and provincial governments handled data in their responses to the COVID-19 pandemic. We find that privacy legislation did not establish any notable legal barriers for collecting, sharing, and using personal information given the permissibility of such activities in health emergencies, as these actions are laid out in provincial health and emergencies laws. More broadly, however, the legislative standard that allows for derogations from consent in emergency situations may be incompatible with individuals’ perceptions of their privacy rights and what they consider to be ‘appropriate’ infringements of these rights, especially when some individuals contest the gravity (or even existence) of the COVID-19 pandemic in the first place.

Section five turns to how next-generation privacy legislation, such as the Consumer Privacy Protection Act (CPPA), might raise the prospect of significant changes in how data could be collected, used, or disclosed in future health crises. The CPPA did not enter into law as a result of a Canadian federal election, which killed the bill on the Order Paper. Nonetheless, we find that a law such as the CPPA could facilitate unprecedented non-consensual handling of personal information.

Section six presents a discussion of the broader themes that cut across the report. These include how the pandemic further reveals the redistribution of power between states and private organizations, the need for novel digital epidemiological processes to have strong bioethics and equitable commitments for those involved in digital epidemiological experiments, and the need to assess the roles of consent in future health emergencies, especially when new legislative frameworks might permit more permissive and non-consensual data collection, use, and disclosure for health-related purposes. Section seven presents a short conclusion to our report.

Footnotes

1. George A. Soper. (1919). “The Lessons of the Pandemic,” Science 49(1274).


Download the full report: “Pandemic Privacy: A Preliminary Analysis of Collection Technologies, Data Collection Laws, and Legislative Reform during COVID-19”

Initial Thoughts on Biden’s Executive Order on Improving the Nation’s Cybersecurity


On May 12, 2021, President Joseph Biden promulgated an Executive Order (EO) to compel federal agencies to modify and enhance their cybersecurity practices. In this brief post I highlight a handful of elements of the EO that are noteworthy for the United States and that can also be used to inform and evaluate non-American cybersecurity practices.

The core takeaway, for me, is that the United States government is drawing from its higher level strategies to form a clear and distinct set of policies that are linked to measurable goals. The Biden EO is significant in its scope though it remains unclear whether it will actually lead to government agencies better mitigating the threats which are facing their computer networks and systems.


Update to the SIGINT Summaries

As part of my ongoing research into the Edward Snowden documents, I have found and added an additional two documents to the Canadian SIGINT Summaries. The Summaries include downloadable copies of leaked Communications Security Establishment (CSE) documents, along with summary, publication, and original source information. CSE is Canada’s foreign signals intelligence agency and has operated since the Second World War.

Documents were often produced by CSE’s closest partners which, collectively, form the ‘Five Eyes’ intelligence network. This network includes the CSE, the National Security Agency (NSA), the Government Communications Headquarters (GCHQ), Australian Signals Directorate (ASD), and Government Communications Security Bureau (GCSB).

All of the documents are available for download from this website. Though I am hosting the documents they were all first published by another party. The new documents and their summaries are listed below. The full list of documents and their summary information is available on the Canadian SIGINT Summaries page.

These documents came to light as I examined the activities that took place between the NSA and New Zealand’s signals intelligence agency. The first, “NSA Intelligence Relationship with New Zealand”, notes that Canada is a member of the SIGINT Seniors Pacific group as well as SIGINT Seniors Europe. The second, “SIGINT Development Forum (SDF) Minutes”, notes that CSE and GCSB define shaping as “industry engagement and collection bending”, and that CSEC had considered auditing analysts’ accounts as the NSA does, a prospect that had re-arisen as a discussion point.

NSA Intelligence Relationship with New Zealand

Summary: This document summarizes the status of the NSA’s relationship with New Zealand’s Government Communications Security Bureau (GCSB). The GCSB has been forced to expend more of its resources on compliance auditing following recommendations made after it exceeded its authority in assisting domestic law enforcement, but it continues to focus on government and Five Eyes priorities and is encouraged to pursue technical interoperability with the NSA and other FVEY nations.

The NSA provides GCSB with “raw traffic, processing, and reporting on targets of mutual interest, in addition to technical advice and equipment loans.” The GCSB primarily provides the NSA with access to communications which would otherwise remain inaccessible. These include: Chinese communications; Japanese, North Korean, Vietnamese, and South American diplomatic communications; communications of South Pacific Island nations, Pakistan, India, Iran, and Antarctica; and French police and nuclear testing activities in New Caledonia.

Of note, GCSB is a member of SIGINT Seniors Pacific (SSPAC) (includes Australia, Canada, France, India, Korea, New Zealand, Singapore, Thailand, United Kingdom, and United States) as well as SIGINT Seniors Europe (SSEUR) (includes Australia, Belgium, Canada, Denmark, France, Germany, Italy, Netherlands, New Zealand, Norway, Spain, Sweden, United Kingdom, and United States).

Document Published: March 11, 2015
Document Dated: April 2013
Document Length: 3 pages
Associated Article: Snowden revelations: NZ’s spy reach stretches across globe
Download Document: NSA Intelligence Relationship with New Zealand
Classification: TOP SECRET//SI//REL TO USA, FVEY
Authoring Agency: NSA
Codenames: None

SIGINT Development Forum (SDF) Minutes

Summary: This document summarizes the state of signals development amongst the Five Eyes (FVEY). It first outlines the core imperatives for the group, including: ensuring that the top technologies are being identified for use and linked with the capability they bring; improving NSA shaping (targeting routers) (while noting that for CSE and GCSB shaping involves “industry engagement and collection bending”); improving pattern-of-life collection and analysis; improving IP address geolocation across the Internet, radio frequency, and GSM realms; and analyzing how the convergence of communications systems and technologies impacts SIGINT operations.

Privacy issues were seen as being on the group’s radar, on the basis that the “Oversight & Compliance team at NSA was under-resourced and overburdened.” Neither GCSB nor DSD was able to sponsor or audit analysts’ accounts as the NSA does, and CSEC indicated it had considered funding audit billets; while dismissed at the time, the prospect had re-arisen. At the time, the non-NSA FVEYs were considering how to implement ‘super-user’ accounts, where specific staff would run queries for counterparts who were not directly authorized to run queries on selective databases.

GCSB, in particular, was developing its first network analyst team in October 2009, which was meant to prove the utility of network analysis so as to secure additional staff for later supporting STATEROOM and Computer Network Exploitation tasks. Further, GCSB was to continue its work in the South Pacific region, as well as expanding cable access efforts and capabilities during a one-month push. There was also a problem in that 20% of GCSB’s analytic workforce lacked access to DSD’s XKEYSCORE, which mattered given that GCSB provided the NSA with raw data. External tools were needed to access the data because GCSB staff are prohibited from accessing New Zealand data.

Document Published: March 11, 2015
Document Dated: June 8-9, 2009
Document Length: 3 pages
Associated Article: Snowden revelations: NZ’s spy reach stretches across globe
Download Document: SIGINT Development Forum (SDF) Minutes
Classification: TOP SECRET//COMINT//REL TO USA, AUS, CAN, GBR, NZL
Authoring Agency: NSA
Codenames: STATEROOM, XKEYSCORE

‘Defending the Core’ of the Network: Canadian vs. American Approaches

In our recent report, “The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians,” we discussed how the Communications Security Establishment (CSE) developed and deployed a sensor network within domestic and foreign telecommunications networks. While our report highlighted some of the concerns linked to this EONBLUE sensor network, including the dangers of secretly extending government surveillance capacity without any public debate about the extensions, as well as how EONBLUE or other CSE programs collect information about Canadians’ communications, we did not compare how Canada and its closest allies monitor domestic network traffic. This post briefly describes the EONBLUE sensor program, what may be equivalent domestic programs in the United States, and the questions that emerge from contrasting what we know about the Canadian and American sensor networks.

What is EONBLUE?

EONBLUE was developed and deployed by the CSE, Canada’s premier signals intelligence agency. The EONBLUE sensor network “is a passive SIGINT system that was used to collect ‘full-take’ data, as well as conduct signature and anomaly based detections on network traffic.” Prior Snowden documents showcased plans to integrate EONBLUE into government networks; the network has already been integrated into private companies’ networks. Figure one outlines the different ‘shades of blue’, or variations of the EONBLUE sensors:
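
The quoted description says the system performs “signature and anomaly based detections on network traffic.” As a purely illustrative sketch of what signature-based detection means in general (the signature names and byte patterns below are invented; nothing here reflects CSE’s actual systems), it amounts to matching known byte patterns against observed payloads:

```python
def match_signatures(payload: bytes, signatures: dict) -> list:
    """Return the names of signatures whose byte pattern occurs in the
    payload. Real passive sensors compile patterns into automata
    (e.g. Aho-Corasick) rather than scanning naively."""
    return [name for name, pattern in signatures.items() if pattern in payload]

# Invented example signatures, for illustration only.
SIGNATURES = {
    "tls-client-hello": b"\x16\x03\x01",
    "http-get": b"GET / HTTP/1.1",
}

packet = b"\x16\x03\x01\x02\x00"  # first bytes of a TLS handshake record
print(match_signatures(packet, SIGNATURES))  # -> ['tls-client-hello']
```

Anomaly-based detection, by contrast, flags traffic that deviates statistically from a learned baseline rather than matching fixed patterns; a passive “full-take” sensor can run both over the same captured stream.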

EONBLUE Sensors


The Politics of Deep Packet Inspection: What Drives Surveillance by Internet Service Providers?

Today, I am happy to make my completed doctoral dissertation available to the public. The dissertation examines what drives, and hinders, wireline network practices that are enabled by Deep Packet Inspection (DPI) routers. Such routers are in wide use by Internet service providers (ISPs) in Canada, the United States, and the United Kingdom, and offer the theoretical capacity for service providers to intrusively monitor, mediate, and modify their subscribers’ data packets in real or near-real time. Given the potential uses of the routers, I was specifically interested in how the politics of deep packet inspection intersected with the following issues: network management practices, content control and copyright, advertising, and national security/policing.

Based on the potential capabilities of deep packet inspection technologies – and the warnings that such technologies could herald the ‘end of the Internet’ as it is known by citizens of the West – I explored what has actually driven the uptake of the technology in Canada, the US, and the UK. I ultimately found that though there were variations in different states’ regulatory processes, regulators tended to arrive at common conclusions. Regulatory convergence stands in opposition to the divergence that arose as elected officials entered into the DPI debates: such officials have been guided by domestic politics and tended to reach significantly different conclusions. In effect, while high-expertise regulatory networks reached common conclusions, elected political officials have demonstrated varying degrees of technical expertise and instead have focused on the politics of communications surveillance. In addition to regulators and elected officials, court systems have also been involved in adjudicating how, when, and under what conditions DPI can be used to mediate data traffic. Effectively, government institutions have served as the primary arenas in which DPI issues are taken up, though the involved government actors often exhibited their own interests in how issues were to be taken up or resolved. The relative role of these different state bodies in the case studies arguably reflects underlying political cultures: regulators are principally involved in the Canadian situation, elected officials and courts play a significant role in the US, and the UK has principally seen DPI debates settled by regulators and elected officials.

Ultimately, while there are important comparative public policy conclusions to the dissertation, such conclusions only paint part of the picture about the politics of deep packet inspection. The final chapter of the dissertation discusses why the concepts of surveillance and privacy are helpful, but ultimately insufficient, to appreciate the democratic significance of deep packet inspection equipment. In response, I suggest that deliberative democratic theory can provide useful normative critiques of DPI-based packet inspection. Moreover, these critiques can result in practical policy proposals that can counter DPI-based practices capable of detrimentally stunting discourse between citizens using the Internet for communications. The chapter concludes with a discussion of how this research can be advanced in the future; while I have sought to clear away some of the murk concerning the technology, my research represents only the first of many steps to reorient Internet policies such that they support, as opposed to threaten, democratic values.

Formal Abstract:

Surveillance on the Internet today extends beyond collecting intelligence at the layer of the Web: major telecommunications companies use technologies to monitor, mediate, and modify data traffic in real time. Such companies functionally represent communicative bottlenecks through which online actions must pass before reaching the global Internet and are thus perfectly positioned to develop rich profiles of their subscribers and modify what they read, do, or say online. And some companies have sought to do just that. A key technology, deep packet inspection (DPI), facilitates such practices.

In the course of evaluating the practices, regulations, and politics that have driven DPI in Canada, the US, and UK it has become evident that the adoption of DPI tends to be dependent on socio-political and economic conditions. Simply put, market or governmental demand is often a prerequisite for the technology’s adoption by ISPs. However, the existence of such demand is no indication of the success of such technologies; regulatory or political advocacy can lead to the restriction or ejection of particular DPI-related practices.

The dissertation proceeds by first outlining how DPI functions and then what has driven its adoption in Canada, the US, and UK. Three conceptual frameworks, path dependency, international governance, and domestic framing, are used to explain whether power structures embedded into technological systems themselves, international standards bodies, or domestic politics are principally responsible for the adoption or resistance to the technology in each nation. After exploring how DPI has arisen as an issue in the respective states I argue that though domestic conditions have principally driven DPI’s adoption, and though the domestic methods of governing DPI and its associated practices have varied across cases, the outcomes of such governance are often quite similar. More broadly, I argue that while the technology and its associated practices constitute surveillance and can infringe upon individuals’ privacy, the debates around DPI must more expansively consider how DPI raises existential risks to deliberative democratic states. I conclude by offering some suggestions on defraying the risks DPI poses to such states.

Download ‘The Politics of Deep Packet Inspection: What Drives Surveillance by Internet Service Providers?’ (.pdf)