Touring the digital through type


Pandemic Privacy: A Preliminary Analysis of Collection Technologies, Data Collection Laws, and Legislative Reform during COVID-19

Earlier this week I published a report, “Pandemic Privacy: A Preliminary Analysis of Collection Technologies, Data Collection Laws, and Legislative Reform during COVID-19,” alongside co-authors Benjamin Ballard and Amanda Cutinha. The report provides a preliminary comparative analysis of how different information technologies were mobilized in response to COVID-19 to collect data, the extent to which Canadian health, privacy, or emergencies laws impeded the response to COVID-19, and, ultimately, the potential consequences of reforming data protection or privacy laws to enable more expansive data collection, use, or disclosure of personal information in future health emergencies.

At its core, we argue that while some events in the pandemic were truly unprecedented–namely how some consumer surveillance and telecommunications systems were transformed to facilitate pandemic-related surveillance, as well as the prospect of how law reform might alter how personal information could be used in future health emergencies–many of these same events have some historical legacy. The COVID-19 pandemic, however, has revealed a situation where familiar disease management concepts have been supercharged by contemporary networked technologies, and further qualitative shifts could take place if privacy law reform further relaxes the requirements that organizations must obtain individuals’ consent before handling their personal information.

While we avoid making specific policy prescriptions in this report, our message is clear: in the aftermath of COVID-19 it will be critical for policymakers, technologists, and the public writ large to look back at how governments handled the pandemic, and individuals’ personal information, and assess what must be done to better manage future health emergencies while best protecting the civil and human rights of all persons. We hope that our report will contribute, in some small way, to these forthcoming deliberations.


Executive Summary:

Phrases like “[t]he pandemic which has just swept round the earth has been without precedent”1 have been commonly read or heard throughout the COVID-19 pandemic. At the onset of the COVID-19 pandemic, there was a race to restrict mobility, undertake health surveillance to determine the source or cause of local outbreaks, and secure personal protective equipment for healthcare workers and domestic populations. Further, and as in past health emergencies, there were efforts to collect and leverage available information to make sense of the spread of the disease, to understand the nature of supply chains so as to determine what equipment was available to treat or assist those afflicted by the disease, and to understand how the novel coronavirus was transmitted and its effects so as to develop vaccines that would mitigate its worst repercussions.

In “Pandemic Privacy: A preliminary analysis of collection technologies, data collection laws, and legislative reform during COVID-19,” we undertake a preliminary comparative analysis of how different information technologies were mobilized in response to COVID-19 to collect data, the extent to which Canadian health, privacy, or emergencies laws impeded the response to COVID-19, and, ultimately, the potential consequences of reforming data protection or privacy laws to enable more expansive data collection, use, or disclosure of personal information in future health emergencies. In analyzing how data has been collected in the United States, United Kingdom, and Canada, we found that while many of the data collection methods could be mapped onto a trajectory of past collection practices, the breadth and extent of data collection, in tandem with how communications networks were repurposed, constituted novel technological responses to a health crisis. Similarly, while the intersection of public and private interests in providing healthcare and government services is not new, the ability of private companies such as Google and Apple to forcefully shape some of the technology-enabled pandemic responses speaks to the significant ability of private companies to guide or direct public health measures that rely on contemporary smartphone technologies. While we found that the uses of technologies were linked to historical efforts to combat the spread of disease, the nature and extent of private surveillance to enable public action was arguably unprecedented.

Turning from the technologies involved in collecting data, we shift to an analysis of how Canadian law enabled governmental collections, uses, and disclosures of personal information, and how legislation that was in force before the outbreak of COVID-19 empowered governments to overcome any legal hurdles that might have prevented state agencies from using data to address COVID-19 in Canada. Despite possessing this lawful authority, however, governments in Canada were often accused of inadequately responding to the pandemic, and they, in turn, sometimes suggested or indicated that privacy legislation impaired their abilities to act. These concerns have precedent insofar as they were raised following the 2003 SARS outbreak, but they were then–as now–found to be meritless: privacy legislation has not been an impediment to data collection, use, or sharing, despite claims to the contrary. The challenges faced by governments across Canada were, in fact, precedented and linked to poor governmental policies and capabilities to collect, use, and share data, just as in past health crises.

Perhaps partially in response to perceptions that privacy rights afforded to Canadians impeded the pandemic response, the federal government of Canada introduced legislation in August 2020 (which ultimately did not pass into law due to an election) that would have both reified existing exemptions to privacy protections and empowered private companies to collect, use, and disclose personal information for further ‘socially beneficial practices’ without first obtaining individuals’ consent. While it is hardly unprecedented for governments to draft and introduce privacy legislation that would expand how personal information might be used, the exclusion of human rights to balance commercial uses of personal information stands as a novel decision at a time when such legislation is regularly linked with explicit human rights protections.

This report proceeds as follows. After a short introduction in Section one, we present the methodologies we used in Section two. Section three turns to how contemporary digital technologies were used to collect data in the United States, United Kingdom, and Canada. Our principal finding is that collection efforts were constrained by the ways in which private companies chose to enable data collection, particularly in the case of contact tracing and exposure notifications, and by how these companies chose to share data that was under their control and how data was repurposed to assist in containing COVID-19. The breadth and extent of data collection was unprecedented when compared to past health crises.

In Section four, we focus on Canadian legal concerns regarding the extent to which privacy and civil liberties protections affected how the federal and provincial governments handled data in their responses to the COVID-19 pandemic. We find that privacy legislation did not establish any notable legal barriers to collecting, sharing, and using personal information, given the permissibility of such activities in health emergencies as laid out in provincial health and emergencies laws. More broadly, however, the legislative standard that allows for derogations from consent in emergency situations may be incompatible with individuals’ perceptions of their privacy rights and what they consider to be ‘appropriate’ infringements of these rights, especially when some individuals contest the gravity (or even existence) of the COVID-19 pandemic in the first place.

Section five turns to how next-generation privacy legislation, such as the Consumer Privacy Protection Act (CPPA), might raise the prospect of significant changes in how data could be collected, used, or disclosed in future health crises. The CPPA did not enter into law; the bill died on the Order Paper when a Canadian federal election was called. Nonetheless, we find that a law such as the CPPA could facilitate unprecedented non-consensual handling of personal information.

Section six presents a discussion of the broader themes that cut across the report. These include how the pandemic further reveals the redistribution of power between states and private organizations, the need for novel digital epidemiological processes to carry strong bioethical and equity commitments to those involved in digital epidemiological experiments, and the need to assess the roles of consent in future health emergencies, especially when new legislative frameworks might permit more expansive, non-consensual data collection, use, and disclosure for health-related purposes. Section seven presents a short conclusion to our report.

Footnotes

1. George A. Soper. (1919). “The Lessons of the Pandemic,” Science 49(1274).


Download the full report: “Pandemic Privacy: A Preliminary Analysis of Collection Technologies, Data Collection Laws, and Legislative Reform during COVID-19”

Answers and Further Analysis Concerning NSIRA’s 2021 Cyber Incident


The National Security Intelligence Review Agency (NSIRA) is responsible for conducting national security reviews of Canadian federal agencies. On April 16, 2021, the Agency announced that it had suffered a ‘cyber incident’. An unauthorized party had accessed the Agency’s unclassified external network as part of that incident. The affected network did not contain Secret, Top Secret, or Top Secret SI information. In August 2021, NSIRA posted an update with additional details about the cyber incident that it had experienced.

I raised a number of questions about the nature of the Agency’s incident, and its implications, in a post I published earlier in 2021. In this post, I provide an update as well as some further analysis of the incident based on the information that NSIRA revealed in August 2021.

I begin by outlining the additional details that NSIRA has provided about the incident and juxtapose that information with what has been provided by the Canadian Centre for Cyber Security (CCCS) about the Microsoft Exchange vulnerability that led to NSIRA’s incident. I note that NSIRA (or the team(s) responsible for securing its networks) seems to have either failed to patch NSIRA’s on-premises Exchange server when the vulnerability was first announced, or been unable to successfully implement mitigation measures intended to prevent the exploitation of the server. The result was that employee information was obtained by an unauthorized party.

Next, I note the extent to which NSIRA’s update responds to the initial questions I raised when writing about this incident in April 2021. On the whole, most of the questions I raised have been answered to at least some extent.

I conclude by discussing the significance of the information that was exfiltrated from NSIRA, the likelihood that a nation-state actor either conducted the operation or now has access to the exfiltrated data, and what this incident may suggest for NSIRA’s IT security, and by raising questions about NSIRA’s decommissioning of its Protected networks.


Canadian National Security Assessment Rules Endanger Scholarly Research


On July 14, 2021, I published an opinion article in the Globe & Mail entitled “The new security research rules threaten universities’ ability to be open and inclusive.” The article is republished, in full, below.


On Monday, the Canadian government imposed mandatory national security risk assessments on scholarly research. The new rules apply to projects that receive funding from the Natural Sciences and Engineering Research Council (NSERC) and involve foreign researchers or private-sector organizations. The stated intent of the assessments is to prevent intellectual property from being stolen and ensure that Canadian researchers do not share industrial, military or intelligence secrets with foreign governments or organizations to the detriment of Canadian interests. But they will chill research and scholarly training, accentuate anti-immigrant biases and may amplify national security problems.

In brief, these assessments add an analysis of national security issues into the process of funding partnerships by compelling researchers to evaluate whether their work is “sensitive.” Cutting-edge topics that are considered sensitive include artificial intelligence, biotechnology, medical technology, quantum science, robotics, autonomous systems and space technology. Amongst other criteria, researchers must also assess risks posed by partners, including whether they might disclose information to other groups that could negatively affect Canada’s national security, whether they could be subject to influence from foreign governments or militaries, or if they lack clear explanations for how or why they can supplement funding from NSERC.

If a researcher or their team cannot state there are no risks, they must itemize prospective risks, even in cases where they must speculate. Mitigation processes must explain what security protocols will be established, how information might be restricted on a need-to-know basis, or how collaborators will be vetted. Government documents specifically warn researchers to take care when working with members of the university research community, such as contractors, employees or students.

Whenever research is assessed as raising national security concerns, it may be reviewed by NSERC and Canada’s national security agencies, and research programs may need to be modified or partners abandoned before funding will be released.

These assessments will chill Canadian research. Consider Canadian university professors who are working on artificial intelligence research, but who hold Chinese citizenship and thus could potentially be subject to compulsion under China’s national security legislation. Under the assessment criteria, it would seem that such researchers are now to be regarded as inherently riskier than colleagues who pursue similar topics, but who hold Canadian, American or European citizenship. The assessments will almost certainly reify biases against some Canadian researchers on the basis of their nationality, something that has become commonplace in the United States as Chinese researchers have increasingly been the focus of U.S. security investigations.

Students who could potentially be directly or indirectly compelled by their national governments may now be deemed a threat to Canada’s national security and interests. Consequently, international students or those who have families outside of Canada might be kept from fully participating on professors’ research projects out of national security concerns and lose out on important training opportunities. This stigma may encourage international students to obtain their education outside of Canada.

These assessments may create more problems than they solve. Some Canadian researchers with foreign citizenships might apply for foreign funding to avoid national security assessments altogether. But they may also be motivated to conceal this fact for fear of the suspicion that might otherwise accompany the funding, especially based on how their American counterparts have been targeted in FBI-led investigations. Foreign intelligence services look for individuals who have something to hide to exploit such vulnerabilities. In effect, these assessments may amplify the prospect that researchers will be targeted for recruitment by foreign spy agencies and exacerbate fears of foreign espionage and illicit acquisition of intellectual property.

What must be done? If the government insists on applying these assessments, then NSERC must commit to publishing annual reports explaining how regularly research is assessed, the nature of the assessed research, rationales for assessments and the outcomes. Canada’s national security review agencies will also have to review NSERC’s assessments to ensure that the results are based in fact, not suspicion or bias. Researchers can and should complain to the review agencies and the news media if they believe that any assessment is inappropriate.

Ultimately, Canadian university leaders must strongly oppose these assessments as they are currently written. The chill of national security threatens to deepen suspicions towards some of our world-leading researchers and exceptional international students, and those running universities must publicly stand up for their communities. Their universities’ status as being open and inclusive – and being independent, world-leading research bodies – depends on their advocacy.

Questions Surrounding NSIRA’s ‘Cyber Incident’


On April 16, 2021, the National Security Intelligence Review Agency (NSIRA) published a statement on its website declaring that it had experienced a ‘cyber incident’ that involved an unauthorized party accessing the Agency’s external network. This network was not used for Secret or Top Secret information.

NSIRA is responsible for conducting national security reviews of Canadian federal agencies, inclusive of “the Canadian Security Intelligence Service (CSIS) and the Communications Security Establishment (CSE), as well as the national security and intelligence activities of all other federal departments and agencies.” The expanded list of departments and agencies includes the Royal Canadian Mounted Police (RCMP), the Canada Border Services Agency (CBSA), the Department of National Defence (DND), Global Affairs Canada (GAC), and the Department of Justice (DoJ). As a result of their expansive mandate, the Agency has access to broad swathes of information about the activities which are undertaken by Canada’s national security and intelligence community. 

Despite the potential significance of this breach, little has been publicly written about the possible implications of the unauthorized access. This post acts as an early round of analysis of the potential significance of the access by, first, outlining the kinds of information which may have been accessed by the unauthorized party and, then, raising a series of questions that remain unanswered in NSIRA’s statement. The answers to these questions may dictate the actual seriousness and severity of the cyber incident.

What is Protected Information?

NSIRA’s unclassified information includes Protected information. Information is classified as Protected when, if compromised, it “could reasonably be expected to cause injury to a non-national interest—that is, an individual interest such as a person or an organization.” There are three classes of protected information that are applied based on the sensitivity of the information. Protected A could, if compromised, “cause injury to an individual, organization or government,” whereas compromising Protected B information could “cause serious injury.” Compromising Protected C information could “cause extremely grave injury”. Protected C information is safeguarded in the same manner as Confidential or Secret material, the compromise of which could, respectively, cause injury or serious injury to “the national interest, defence and maintenance of the social, political, and economic wellbeing of Canada”.
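The escalating scale described above can be summarized as a simple lookup table. The following Python sketch is purely illustrative: the level names and injury phrases are taken from the definitions quoted above, while the data structure and helper function are hypothetical conveniences, not any official government schema.

```python
# Illustrative mapping of Canada's Protected classification levels to the
# injury expected if information at that level were compromised. The injury
# phrases follow the definitions quoted in the text; the structure itself
# is a hypothetical sketch, not an official schema.
PROTECTED_LEVELS = {
    "Protected A": "injury to an individual, organization or government",
    "Protected B": "serious injury",
    "Protected C": "extremely grave injury",
}


def expected_injury(level: str) -> str:
    """Return the expected injury for a Protected level (hypothetical helper)."""
    if level not in PROTECTED_LEVELS:
        raise ValueError(f"Unknown classification level: {level}")
    return PROTECTED_LEVELS[level]


print(expected_injury("Protected B"))  # serious injury
```

The ordering of the table mirrors the point made above: each successive level is safeguarded more stringently because the consequences of its compromise are graver.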

Intrusion into protected networks brings with it potentially significant concerns based on the information which may be obtained. Per Veterans Affairs, employee information associated with Protected A information could include ‘tombstone’ information such as name, home address, telephone numbers or date of birth, personal record identifiers, language test results, or views which if made public would cause embarrassment to the individual or organization. Protected B could include medical records (e.g., physical, psychiatric, or psychological descriptions), performance reviews, tax returns, an individual’s financial information, character assessments, or other files or information that are composed of a significant amount of personal information. 

More broadly, Protected A information can include third-party business information that has been provided in confidence, contracts, or tenders. Beyond staff information, Protected B information might include information that, if disclosed, could cause a loss of competitive advantage to a Canadian company or could impede the development of government policies, such as by revealing Treasury Board submissions.

In short, information classified as Protected could be manipulated for a number of ends depending on the specifics of what information is in a computer network. Theoretically, and assuming that an expansive amount of protected information were present, the information might be used by third parties to attempt to recruit or target government staff, or could give insights into activities that NSIRA was interested in reviewing, or is actively reviewing. Further, were NSIRA either reviewing non-classified government policies or preparing such policies for the Treasury Board, the revelation of such information might advantage unauthorized parties by enabling them to predict or respond to those policies in advance of their being put in place.
