Questions Surrounding NSIRA’s ‘Cyber Incident’

Photo by alleksana on Pexels.com

On April 16, 2021, the National Security and Intelligence Review Agency (NSIRA) published a statement on its website declaring that it had experienced a ‘cyber incident’ in which an unauthorized party accessed the Agency’s external network. This network was not used for Secret or Top Secret information.

NSIRA is responsible for conducting national security reviews of Canadian federal agencies, including “the Canadian Security Intelligence Service (CSIS) and the Communications Security Establishment (CSE), as well as the national security and intelligence activities of all other federal departments and agencies.” The expanded list of departments and agencies includes the Royal Canadian Mounted Police (RCMP), the Canada Border Services Agency (CBSA), the Department of National Defence (DND), Global Affairs Canada (GAC), and the Department of Justice (DoJ). As a result of this expansive mandate, the Agency has access to broad swathes of information about the activities undertaken by Canada’s national security and intelligence community.

Despite the potential significance of this breach, little has been publicly written about the possible implications of the unauthorized access. This post offers an early analysis of the incident’s potential significance by, first, outlining the kinds of information that may have been accessed by the unauthorized party and, second, raising a series of questions that NSIRA’s statement leaves unanswered. The answers to these questions will dictate the actual seriousness and severity of the cyber incident.

What is Protected Information?

NSIRA’s unclassified information includes Protected information. Information is designated as Protected when its compromise “could reasonably be expected to cause injury to a non-national interest—that is, an individual interest such as a person or an organization.” There are three levels of Protected information, applied according to the sensitivity of the information: compromising Protected A information could “cause injury to an individual, organization or government,” compromising Protected B information could “cause serious injury,” and compromising Protected C information could “cause extremely grave injury.” Protected C information is safeguarded in the same manner as Confidential or Secret material, the compromise of which could, respectively, cause injury or serious injury to “the national interest, defence and maintenance of the social, political, and economic wellbeing of Canada.”

Intrusion into protected networks raises potentially significant concerns, depending on the information that may be obtained. Per Veterans Affairs, employee information associated with Protected A information could include ‘tombstone’ information such as name, home address, telephone numbers or date of birth, personal record identifiers, language test results, or views that, if made public, would cause embarrassment to the individual or organization. Protected B information could include medical records (e.g., physical, psychiatric, or psychological descriptions), performance reviews, tax returns, an individual’s financial information, character assessments, or other files that contain a significant amount of personal information.

More broadly, Protected A information can include third-party business information provided in confidence, contracts, or tenders. Beyond staff information, Protected B information might include material that, if disclosed, could cause a loss of competitive advantage to a Canadian company or could impede the development of government policies, such as by revealing Treasury Board submissions.

In short, information classified as Protected could be exploited for a number of ends, depending on what information is held in a given network. Theoretically, and assuming that an expansive amount of protected information were present, the information might be used by third parties to attempt to recruit or target government staff, or it could give insights into activities that NSIRA was interested in reviewing or is actively reviewing. Further, were NSIRA either reviewing non-classified government policies or preparing such policies for the Treasury Board, the revelation of that information might advantage unauthorized parties by enabling them to predict or respond to those policies before they are put in place.


Equity, inclusion and Canada’s COVID Alert app

Photo by Anton Uniqueton on Pexels.com

The governments of Canada and Ontario announced the release of their COVID Alert exposure notification app on July 31, 2020. The application was developed with privacy protection in mind, and has undergone governmental and private-sector reviews of its security and privacy. It has received high praise from many notable members of Canada’s privacy community, many of whom—myself included—have installed the application.

Despite this, the app still raises concerns of a non-technical nature, particularly when it comes to equity and inclusion.

COVID Alert App 101

COVID Alert can currently be used by residents of Ontario to receive exposure notifications. Canadian residents outside of Ontario can download the app, but it won’t gain full functionality until their provincial health authority joins the project. The application uses the exposure notification framework that was created by Google and Apple and integrated into the companies’ respective operating systems.

COVID Alert does not collect:

  • Your name or address;
  • Your phone’s contacts;
  • Your health information;
  • The health information of people around you; or
  • Your location.

A smartphone with the app installed will generate random codes every five minutes and transmit them using Bluetooth to any phone within two metres that also has the app installed. Your smartphone will retain a log of all the codes that have been received for 14 days; information is deleted after that period. If the code of a person who has tested positive for COVID-19, and has uploaded their status to a government server, is found to have been proximate to your device for 15 minutes or more, your device will notify you. At no point does the app collect any person’s name or the places they have visited; if you receive an exposure notification, neither the app nor the government can tell you who tested positive for COVID-19 or where you were potentially exposed to the disease. (For a far more detailed overview of how Apple and Google’s exposure notification framework operates, see Hussein Nasser’s explainer video.)
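To make the mechanics more concrete, here is a minimal, illustrative sketch of that matching logic in Python. It is not the actual Google-Apple framework (which derives rotating identifiers cryptographically rather than storing raw random codes); every name and parameter below is a hypothetical simplification of the behaviour described above.

```python
# Illustrative sketch only: NOT the real Google-Apple exposure
# notification framework. All names and parameters are hypothetical
# simplifications of the behaviour described in the post.
import os
from collections import defaultdict

CODE_ROTATION_SECONDS = 5 * 60        # a fresh random code is broadcast every 5 minutes
RETENTION_SECONDS = 14 * 24 * 3600    # received codes are kept for 14 days, then deleted
EXPOSURE_THRESHOLD_SECONDS = 15 * 60  # ~15 minutes of proximity triggers a notification


def new_random_code() -> bytes:
    """Generate a random, unlinkable code to broadcast over Bluetooth."""
    return os.urandom(16)


class ExposureLog:
    """Log of codes heard from nearby devices, kept on the phone itself."""

    def __init__(self) -> None:
        # code -> timestamps (in seconds) at which that code was heard nearby
        self.heard: defaultdict[bytes, list[float]] = defaultdict(list)

    def record(self, code: bytes, timestamp: float) -> None:
        self.heard[code].append(timestamp)

    def prune(self, now: float) -> None:
        """Delete any sightings older than the 14-day retention window."""
        for code in list(self.heard):
            recent = [t for t in self.heard[code] if now - t < RETENTION_SECONDS]
            if recent:
                self.heard[code] = recent
            else:
                del self.heard[code]

    def check_exposure(self, positive_codes: set[bytes]) -> bool:
        """Compare locally heard codes against codes voluntarily uploaded by
        people who tested positive; no names or locations are involved."""
        exposure_seconds = 0
        for code, sightings in self.heard.items():
            if code in positive_codes:
                # each sighting approximates one rotation interval of proximity
                exposure_seconds += len(sightings) * CODE_ROTATION_SECONDS
        return exposure_seconds >= EXPOSURE_THRESHOLD_SECONDS
```

The design property worth noticing is that all of the matching happens on the device: the server only distributes the codes of people who have tested positive, and never learns who was near whom.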

The server will normally retain data from devices that contact it for three months, or for up to two years if suspicious activity is identified. Access to these logs is highly restricted to authorized users, who are bound by security obligations to protect, and not misuse, the data.

In addition to the strong technical safeguards associated with the Apple-Google framework, the federal and Ontario privacy commissioners conducted their own privacy reviews of the app. The app’s developers spent a significant amount of time ensuring it was maximally accessible to Canadians who may have visual, auditory or other physical impairments. Both the Canadian Centre for Cyber Security and BlackBerry Security have assessed the application’s security, and a formal vulnerability disclosure process for the application has been created. Finally, the Canadian government has established an Advisory Council composed of members of industry, academia and civil society, and is developing a framework to define and evaluate the app’s effectiveness, which will include an audit by the Office of the Privacy Commissioner of Canada and Health Canada later this year. If the app is found to be ineffective, it will be decommissioned.

Considering all of this, the Canadian government and its provincial partner are to be congratulated for learning from many of the lessons of their international peers: collecting a minimal amount of data, developing a secure app, and subjecting themselves and the COVID Alert app to substantial accountability checks.

Access and Equity Issues Remain

As I wrote at the onset of the pandemic, any COVID-19 app must be developed with social inclusivity in mind. Technologies are inherently political in nature and their design, in part, defines what is and isn’t normal behaviour, what their use cases are, and what social norms govern their use. Inclusive policy design should accompany technologies that are intended to be used throughout society; at minimum, policy-makers should ask: Who is this technology designed for? What is this technology specifically intended to do or change in society? Who is included or excluded from using this technology? And, how might this technology detrimentally affect some members of society? It is this set of questions that brings some of the limitations of the COVID Alert app to the fore.

The COVID Alert application is designed for Canadians who own sufficiently recent smartphones; this means that people lacking such smartphones are excluded from using the app. A June 2020 study from Ryerson University’s Cybersecure Policy Exchange showed that 26 per cent of households earning less than $20,000, and about the same percentage of people over 60 years old, lack a smartphone. Similarly, people who identify as Black, Indigenous and people of colour tend to be less affluent and, as such, are less likely to own smartphones capable of installing the application. All of the aforementioned groups — the less economically advantaged, the elderly and racialized communities — have tended to disproportionately suffer the effects of COVID-19.

The COVID Alert app is designed to achieve a positive social good, namely mitigating the spread of disease, but there are live questions about any such app’s ability to accomplish this goal. A team from Oxford University developed a model in April 2020 that found that approximately 60 per cent of the U.K.’s general population would need to install an app for it to be fully effective; this works out to approximately 80 per cent of all smartphone users in that country (the implied smartphone-ownership rate being roughly 75 per cent of the population, since 0.60 ÷ 0.75 = 0.80). A lower adoption rate may still help to inhibit the spread of COVID-19, but at less dramatic rates.

Beyond questions of the actual efficacy of any given app, there are also potential unintended consequences that might disproportionately affect those who enjoy less privilege in Canadian society. First, carding is a pernicious problem in Canada and there is a risk that law enforcement officers, or other public officers, might demand to see a person’s app to assess whether that person has been exposed to COVID-19. With an unlocked device in hand, officers could search through the device for potentially incriminating materials they otherwise would not have been able to access; these kinds of activities would be a continuation of the enhanced, and often illegal, searches that Black-identifying Canadians are routinely subjected to. A recent report from the Canadian Civil Liberties Association found that law enforcement agencies have disproportionately applied the law throughout the pandemic to “Black, Indigenous, and other racialized groups, those with precarious housing, recent immigrants, youth, members of the LGBTQ2S community, and certain religious minorities.” It is reasonable to worry that over-policing will extend to so-called “exposure checks” that then turn into smartphone fishing expeditions.

Second, private organizations, such as businesses, may also demand that individuals reveal their COVID-19 exposure status before entering workplaces. Some individuals, such as those who cannot afford a sufficiently up-to-date smartphone or who have lost their phone and cannot afford to replace it, may be denied access to employment. Similarly, if showing one’s COVID-19 status is a prerequisite to entering a shop, these same people may be denied access to grocery stores, pharmacies or other essential businesses.

Some Canadians may regard the aforementioned risks as merely theoretical, or as too high a bar to climb in a time of crisis. Such a response, however, misses the very point: the potential harms are linked to implicit social biases and structural inequalities that mean some in Canadian society have to worry about these risks, whereas others do not. When Canadian leaders assert that they want to build more inclusive societies, the issues associated with the COVID Alert app lay bare social inequity and demonstrate the need for government to explain how it expects to ameliorate these inequities through policy and law. Ignoring these inequities is not an option for a truly inclusive society.

COVID Alert and Inclusive Policy

In the excellent accessibility documentation that accompanies the COVID Alert app, the Canadian Digital Service acknowledges that:

“Some people may have phones or operating systems that do not support downloading the app. And some people may not have smart phones at all. Many people may not have affordable access to the Internet, and the app needs an Internet connection at least once a day to work. … COVID Alert is one part of our public health effort to limit COVID-19. The app does not replace manual contact tracing by local public health authorities. Manual contact tracing is available to everyone in Canada, along with other important resources.”

This acknowledgement is important, and positive, insofar as it showcases that the developers recognize the app’s shortcomings and make clear that other resources are available to Canadians to mitigate the spread of COVID-19. But the governments of Canada and Ontario can go much further to address these limitations, as well as the potential harms linked with the COVID Alert app.

First, governments in Canada can pass legislation that bars public officials, as well as private individuals or organizations, from demanding that individuals install the application or compelling them to disclose any information from it. This legislation could make it a criminal offence to issue such a request, in order to prevent police, social workers, landlords, retail staff or others from conducting “exposure checks” that can be used to discriminate against minority populations or less advantaged members of society. Not only would such legislation deter bad behaviour by punishing individuals who inappropriately access information on smartphones, but it might also increase trust in the application by giving individuals genuine control over the information it holds.

Second, the federal and provincial governments can rapidly explain how they will ensure equity in the health responses provided to all Canadians, including those who are less affluent or privileged. Given that governments are unlikely to supply less-advantaged residents of Canada with smartphones that can run the COVID Alert app, or to subsidize their purchase, they should explain what other policies will be implemented to ensure that all Canadians enjoy health monitoring; this might, as an example, include increased availability of testing in less affluent communities, focused public outreach conducted through local health authorities and community groups, or broader efforts to meaningfully invest in the social determinants of health that are known to increase health resiliency.

Third, and relatedly, the governments should rapidly release information about how, specifically, the federal and provincial departments of health will assess the success or efficacy of the COVID Alert app. Canadians deserve to know how the government is modelling success and failure, and how the government is accounting for the fact that many less affluent and older residents of Canada lack smartphones capable of installing the COVID Alert app. Without clear success or failure criteria, the COVID Alert app risks becoming a prop in “pandemic theatre” as opposed to a demonstrably effective tool to mitigate the spread of the disease. Given that public and private groups had time to assess the app’s privacy and security properties, it is shocking that health officials have yet to explain how the app’s utility should be measured.

In summary, the technical teams that developed the application, the bodies responsible for assessing the app’s security, and the privacy commissioners’ offices have all performed admirably. The overlapping accountability regimes surrounding the app should give Canadians confidence that it will not be used to nefariously collect data, and that it will be decommissioned once shown to be ineffective or no longer needed. But more is needed. Governments that have committed to inclusive policy design must go beyond making the design of the technology accessible: they must ensure that all people can either safely access and use it, or have access to equivalent public health protections. Governments in Canada must focus on building up trust and proving that public health efforts are being designed to protect all residents of Canada, especially those most detrimentally affected by the pandemic. The time for action is now.

(This article was first published by First Policy Response.)

Government Surveillance Accountability: The Failures of Contemporary Interception Reports

Photo by Gilles Lambert on Unsplash

Over the past several years I’ve undertaken research exploring how, how often, and for what reasons governments in Canada access telecommunications data. As one facet of this research, I worked with Dr. Adam Molnar to understand how regularly policing agencies across Canada have sought, and obtained, warrants to lawfully engage in real-time electronic surveillance. Such data is particularly important given how regularly Canadian law enforcement agencies call for new powers: how effective are the historical methods of capturing communications data? How useful are the statistics that are tabled by governments? We answer these questions in a paper published in the Canadian Journal of Law and Technology, entitled “Government Surveillance Accountability: The Failures of Contemporary Canadian Interception Reports.” The abstract follows, as do links to the Canadian interception reports upon which we based our findings.

Abstract:

Real-time electronic government surveillance is recognized as amongst the most intrusive types of government activity upon private citizens’ lives. There are usually stringent warranting practices that must be met prior to law enforcement or security agencies engaging in such domestic surveillance. In Canada, federal and provincial governments must report annually on these practices when they are conducted by law enforcement or the Canadian Security Intelligence Service, disclosing how often such warrants are sought and granted, the types of crimes such surveillance is directed towards, and the efficacy of such surveillance in being used as evidence and securing convictions.

This article draws on an empirical examination of federal and provincial electronic surveillance reports in Canada to examine the usefulness of Canadian governments’ annual electronic surveillance reports for legislators and external stakeholders alike to hold the government to account. It explores whether there are primary gaps in accountability, such as where there are no legislative requirements to produce records to legislators or external stakeholders. It also examines the extent to which secondary gaps exist, such as where there is a failure of legislative compliance or ambiguity related to that compliance.

We find that extensive secondary gaps undermine legislators’ abilities to hold government to account and weaken capacities for external stakeholders to understand and demand justification for government surveillance activities. In particular, these gaps arise from failures to table reports annually, from divergent report formatting across jurisdictions, and from deficient narrative explanations accompanying the tabled electronic surveillance reports. The chronic nature of these gaps leads us to argue that there are policy failures emergent from the discretion granted to government Ministers and from failures to deliberately establish conditions that would ensure governmental accountability. Unless these deficiencies are corrected, accountability reporting as a public policy instrument threatens to advance a veneer of political legitimacy at the expense of maintaining fulsome democratic safeguards to secure the freedoms associated with liberal democratic political systems. We ultimately propose a series of policy proposals which, if adopted, should ensure that government accountability reporting is both substantial and effective as a policy instrument to monitor and review the efficacy of real-time electronic surveillance in Canada.

Canadian Electronic Surveillance Reports

Alberta

British Columbia

Government of Canada

Manitoba

New Brunswick

Newfoundland

Nova Scotia

Ontario

Quebec

Saskatchewan

Shining a Light on the Encryption Debate: A Canadian Field Guide

The Citizen Lab and the Canadian Internet Policy and Public Interest Clinic (CIPPIC) have released a joint report, “Shining a Light on the Encryption Debate: A Canadian Field Guide,” which was written by Lex Gill, Tamir Israel, and myself. We argue that access to strong encryption is integral to the defence of human rights in the digital era. Encryption technologies are also essential to securing digital transactions, protecting public safety, and safeguarding national security interests. Unfortunately, many state agencies have continued to argue that encryption poses insurmountable or unacceptable barriers to their investigative and intelligence-gathering activities. In response, some governments have advanced irresponsible encryption policies that would limit the public availability and use of secure, uncompromised encryption technologies.

Our report examines this encryption debate, paying particular attention to the Canadian context. It provides insight and analyses for policy makers, lawyers, academics, journalists, and advocates who are trying to understand encryption technologies and the potential viability and consequences of different policies pertaining to encryption.

Section One provides a brief primer on key technical principles and concepts associated with encryption in the service of improving policy outcomes and enhancing technical literacy. In particular, we review the distinction between encryption at rest and in transit, the difference between symmetric and asymmetric encryption systems, the issue of end-to-end encryption, and the concept of forward secrecy. We also identify some of the limits of encryption in restricting the investigative or intelligence-gathering objectives of the state, including in particular the relationship between encryption and metadata.
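As a toy illustration of the symmetric/asymmetric distinction that the primer covers, the following Python sketch uses the open-source `cryptography` package; it is my own illustrative example rather than anything drawn from the report itself.

```python
# Toy illustration of symmetric vs. asymmetric encryption using the
# Python "cryptography" package (pip install cryptography). An
# illustrative sketch, not an excerpt from the report.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Symmetric encryption: one shared secret key both encrypts and decrypts.
# A natural fit for data at rest (e.g., an encrypted disk or database).
shared_key = Fernet.generate_key()
f = Fernet(shared_key)
token = f.encrypt(b"data at rest")
assert f.decrypt(token) == b"data at rest"

# Asymmetric encryption: anyone may encrypt with the public key, but only
# the private key holder can decrypt. A natural fit for data in transit
# between parties who never exchanged a secret in advance.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)
ciphertext = public_key.encrypt(b"data in transit", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"data in transit"
```

End-to-end encryption layers this idea so that only the communicating endpoints, and not the service provider in the middle, hold the decryption keys; forward secrecy goes further by deriving fresh, ephemeral session keys so that the compromise of a long-term key does not expose past traffic. Both are omitted from the sketch for brevity.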

Section Two explains how access to strong, uncompromised encryption technology serves critical public interest objectives. Encryption is intimately connected to the constitutional protections guaranteed by the Canadian Charter of Rights and Freedoms as well as those rights enshrined in international human rights law. In particular, encryption enables the right to privacy, the right to freedom of expression, and related rights to freedom of opinion and belief. In an era where signals intelligence agencies operate with minimal restrictions on their foreign facing activities, encryption remains one of the few practical limits on mass surveillance. Encryption also helps to guarantee privacy in our personal lives, shielding individuals from abusive partners, exploitative employers, and online harassment. The mere awareness of mass surveillance exerts a significant chilling effect on freedom of expression. Vulnerable and marginalized groups are both disproportionately subject to state scrutiny and may be particularly vulnerable to these chilling effects. Democracies pay a particularly high price when minority voices and dissenting views are pressured to self-censor or refrain from participating in public life. The same is true when human rights activists, journalists, lawyers, and others whose work demands the ability to call attention to injustice, often at some personal risk, are deterred from leveraging digital networks in pursuit of their activities. Unrestricted public access to reliable encryption technology can help to shield individuals from these threats. Efforts to undermine the security of encryption in order to facilitate state access, by contrast, are likely to magnify these risks. Uncompromised encryption systems can thus foster the security necessary for meaningful inclusion, democratic engagement, and equal access in the digital sphere.

Section Three explores the history of encryption policy across four somewhat distinct eras, with a focus on Canada to the extent that the Canadian government played an active role in addressing encryption. The first era is characterized by the efforts of intelligence agencies such as the United States National Security Agency (NSA) to limit the public availability of secure encryption technology. In the second era of the 1990s, encryption emerged as a vital tool for securing electronic trust on the emerging web. In the third era—between 2000 and 2010—the development and proliferation of strong encryption technology in Canada, the United States, and Europe progressed relatively unimpeded. The fourth era extends from 2011 to the present day, during which calls to compromise, weaken, and restrict access to encryption technology have steadily re-emerged.

Section Four reviews the broad spectrum of legal and policy responses to government agencies’ perceived encryption “problem,” including historical examples, international case studies, and present-day proposals. The section provides an overview of factors which may help to evaluate these measures in context. In particular, it emphasizes questions related to: (1) whether the proposed measure is truly targeted and avoids collateral or systemic impacts on uninvolved parties; (2) whether there is an element of conscription or compelled participation which raises an issue of self-incrimination or unfairly impacts the interests of a third party; and (3) whether, in considering all the factors, the response remains both truly necessary and truly proportionate. The analysis of policy measures in this section proceeds in three categories. The first category includes measures designed to limit the broad public availability of effective encryption tools. The second category reviews measures that are directed at intermediaries and service providers. The third category focuses on efforts that target specific encrypted devices, accounts, or individuals.

Section Five examines the necessity of proposed responses to the encryption “problem.” A holistic and contextual analysis of the encryption debate makes clear that the investigative and intelligence costs imposed by unrestricted public access to strong encryption technology are often overstated. At the same time, the risks associated with government proposals to compromise encryption in order to ensure greater ease of access for state agencies are often grossly understated. When weighed against the profound costs to human rights, the economy, consumer trust, public safety, and national security, such measures will rarely—if ever—be proportionate and almost always constitute an irresponsible approach to encryption policy. In light of this, rather than finding ways to undermine encryption, the Government of Canada should make efforts to encourage the development and adoption of strong and uncompromised technology.

DOWNLOAD THE FULL REPORT

Project Support

This research was led by the Citizen Lab at the Munk School of Global Affairs, University of Toronto, as well as the Canadian Internet Policy and Public Interest Clinic (CIPPIC) at the University of Ottawa. This project was funded, in part, by the John D. and Catherine T. MacArthur Foundation and the Ford Foundation.

The authors would like to extend their deepest gratitude to a number of individuals who have provided support and feedback in the production of this report, including (in alphabetical order) Bram Abramson, Nate Cardozo, Masashi Crete-Nishihata, Ron Deibert, Mickael E.B., Andrew Hilts, Jeffrey Knockel, Adam Molnar, Christopher Prince, Tina Salameh, Amie Stepanovich, and Mari Jing Zhou. Any errors remain the fault of the authors alone.

We are also grateful to the many individuals and organizations who gave us the opportunity to share early versions of this work, including Lisa Austin at the Faculty of Law (University of Toronto); Vanessa Rhinesmith and David Eaves at digital HKS (Harvard Kennedy School); Ian Goldberg and Erinn Atwater at the Cryptography, Security, and Privacy (CrySP) Research Group (University of Waterloo); Florian Martin-Bariteau at the Centre for Law, Technology and Society (University of Ottawa); and the Citizen Lab Summer Institute (Munk School of Global Affairs, University of Toronto).

Authors

Lex Gill is a Citizen Lab Research Fellow. She has also served as the National Security Program Advocate to the Canadian Civil Liberties Association, as a CIPPIC Google Policy Fellow and as a researcher to the Berkman Klein Center for Internet & Society at Harvard University. She holds a B.C.L./LL.B. from McGill University’s Faculty of Law.

Tamir Israel is Staff Lawyer at the Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic at the University of Ottawa, Faculty of Law. He leads CIPPIC’s privacy, net neutrality, electronic surveillance and telecommunications regulation activities and conducts research and advocacy on a range of other digital rights-related topics.

Christopher Parsons is currently a Research Associate at the Citizen Lab, at the Munk School of Global Affairs, University of Toronto, as well as the Managing Director of the Telecom Transparency Project at the Citizen Lab. He received his Bachelor’s and Master’s degrees from the University of Guelph, and his Ph.D. from the University of Victoria.

Transparency in Surveillance: Role of various intermediaries in facilitating state surveillance transparency

‘Communication’ by urbanfeel (CC BY-ND 2.0) at https://flic.kr/p/4HzMbw

Last year, a report that I wrote for the Centre for Law and Democracy was published online. The report, “Transparency in Surveillance: Role of various intermediaries in facilitating state surveillance transparency,” discusses how governments have expanded their surveillance capabilities in an effort to enhance law enforcement, foreign intelligence, and cybersecurity powers, as well as the implications of such expansions. After outlining some of these powers and clarifying their impact on communicating parties, I explore how the voluntary activities undertaken by communications intermediaries can also facilitate government surveillance. However, while private companies can facilitate government surveillance, they can also facilitate transparency around that surveillance by proactively working to inform their users about government activities. The report concludes by discussing the broader implications of contemporary state surveillance practices, with a focus on the chilling effects that these practices have on social discourse writ large.

Cite as: Parsons, Christopher. (2016). “Transparency in Surveillance: Role of various intermediaries in facilitating state surveillance transparency,” Centre for Law and Democracy. Available at: http://responsible-tech.org/wp-content/uploads/2016/06/Parsons.pdf

Read “Transparency in Surveillance: Role of various intermediaries in facilitating state surveillance transparency”

Computer network operations and ‘rule-with-law’ in Australia

‘Cyberman’ by Christian Cable (CC BY-NC 2.0) at https://flic.kr/p/3JuvWv

Last month, a paper that I wrote with Adam Molnar and Erik Zouave was published by Internet Policy Review. The article, “Computer network operations and ‘rule-with-law’ in Australia,” explores how the Australian government is authorized to engage in Computer Network Operations (CNOs). CNOs refer to government intrusion into, and/or interference with, networked information communications infrastructures for the purposes of law enforcement and national security operations.

The crux of our argument is that Australian government agencies are relatively unconstrained in how they can use CNOs. This has come about because of overly permissive, and often outdated, legislative language concerning technology, which has been leveraged in newer legislation that expands the lawful activities government agencies can conduct. Australian citizens are often assured that existing oversight or review bodies, whether legislative assemblies or dedicated surveillance or intelligence committees, are sufficient to safeguard citizens’ rights. We argue that the laws, as currently written, compel review and oversight bodies to evaluate only the lawfulness of CNO-related activities. This means that, so long as government agencies do not radically act beyond their already permissive legislative mandates, their oversight and review bodies will assert that their expansive activities are lawful regardless of the intrusive nature of the activities in question.

While the growing capabilities of government agencies’ lawful activities, and limitations of their review and oversight bodies, have commonalities across liberal democratic nations, Australia is in a particularly novel position. Unlike its closest allies, such as Canada, the United States, New Zealand, or the United Kingdom, Australia does not have a formal bill of rights or a regional judicial body to adjudicate on human rights. As we write, “[g]iven that government agencies possess lawful authority to conduct unbounded CNO operations and can seek relatively unbounded warrants instead of those with closely circumscribed limits, the rule of law has become distorted and replaced with rule of law [sic]”.

Ultimately, CNOs represent a significant transformation and growth of the state’s authority to intrude upon and affect digital information. That these activities can operate under a veil of exceptional secrecy and threaten the security of information systems raises questions about whether the state has been appropriately restrained in exercising its sovereign powers domestically and abroad: these powers have the capability to extend domestic investigations into the computers of persons around the globe, to facilitate intelligence operations that target individuals and millions of persons alike, and to damage critical infrastructure and computer records. As such, CNOs necessarily raise critical questions about the necessity and appropriateness of state activities, while also showcasing the state’s lack of accountability to the population it is charged with serving.

Read the “Computer network operations and ‘rule-with-law’ in Australia” at Internet Policy Review.