On August 5, 2021, Apple announced that it would soon begin conducting pervasive surveillance of the devices it sells, with the stated intent of expanding protections for children. The company announced three new features. The first will monitor for children sending or receiving sexually explicit images over the Messages application, the second will monitor for the reception or collection of Child Sexual Abuse Material (CSAM), and the third will monitor for searches pertaining to CSAM. These features are slated to be activated in the next versions of Apple’s mobile and desktop operating systems, which will ship to end users in the fall of 2021.
In this post I focus exclusively on the surveillance of children’s messages to detect whether they are receiving or sending sexually explicit images. I begin with a short discussion of how Apple has described this system and spell out the rationales for it, and then proceed to outline some early concerns with how this feature might negatively affect children and adults alike. Future posts will address the second and third child safety features that Apple has announced, as well as broader problems associated with Apple’s unilateral decision to expand surveillance on its devices.
Sexually Explicit Image Surveillance in Messages
Apple currently lets families share access to Apple services and cloud storage using Family Sharing. The organizer of a Family Sharing plan can use a number of parental controls to restrict the activities of children who are included in the plan. For Apple, “children” are individuals who are under 18 years of age.
Once Apple’s forthcoming mobile and desktop operating systems are installed, and provided this analysis feature is enabled in Family Sharing, children’s communications over Apple’s Messages application can be analyzed to assess whether they include sexually explicit images. Apple’s analysis of images will occur on-device, and Apple will not be notified of whether an image is sexually explicit. Should an image be detected, it will initially be blurred out, and if a child wants to see the image they must proceed through either one or two prompts, depending on their age and how their parents have configured the parental management settings.
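To make the described prompt flow a bit more concrete, the following is a minimal, purely illustrative sketch of the decision logic as I understand it from Apple’s announcement. The class and function names, the settings, and the under-13 age cut-off are my assumptions; Apple has not published implementation details.

```python
# A minimal sketch of the prompt flow described above. The names, settings,
# and the under-13 age cut-off are assumptions for illustration; they are
# not Apple's implementation.

from dataclasses import dataclass
from typing import List


@dataclass
class ParentalSettings:
    scan_messages_for_explicit_images: bool   # opt-in through Family Sharing
    notify_parent_if_young_child_views: bool  # only meaningful for younger children


def handle_incoming_image(is_sexually_explicit: bool, child_age: int,
                          settings: ParentalSettings) -> List[str]:
    """Return the sequence of steps a child would experience on-device."""
    steps: List[str] = []

    if not settings.scan_messages_for_explicit_images or not is_sexually_explicit:
        steps.append("display image normally")
        return steps

    # Detection happens on-device; Apple is not notified of the result.
    steps.append("blur the image and warn the child")
    steps.append("prompt 1: child must confirm they still want to view the image")

    # Younger children may face a second prompt warning that a parent will be
    # notified if they proceed (the exact age threshold is assumed here).
    if child_age < 13 and settings.notify_parent_if_young_child_views:
        steps.append("prompt 2: warn that a parent will be notified upon viewing")
        steps.append("notify the parent through Family Sharing if the child proceeds")

    return steps


# Example: a 10-year-old whose parent enabled both settings would face two prompts.
settings = ParentalSettings(scan_messages_for_explicit_images=True,
                            notify_parent_if_young_child_views=True)
print(handle_incoming_image(is_sexually_explicit=True, child_age=10, settings=settings))
```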
It is widely expected that Canadians will be going to the polls in the next few months. In advance of the election the Canadian Security Intelligence Service (CSIS) has published an unclassified report entitled, “Foreign Interference: Threats to Canada’s Democratic Process.”1
In this post I briefly discuss some of the highlights of the report and offer some productive criticism concerning who the report and its guidance are directed at, and the ability of individuals to act on the provided guidance. The report ultimately represents a valuable contribution to efforts to increase awareness of national security issues in Canada and, on that basis alone, I hope that CSIS and other members of Canada’s intelligence and security community continue to publish these kinds of reports.
Summary
The report generally outlines a series of foreign interference-related threats facing Canada and Canadians. Foreign interference includes “attempts to covertly influence, intimidate, manipulate, interfere, corrupt or discredit individuals, organizations and governments to further the interests of a foreign country”; such activities are “carried out by both state and non-state actors” towards “Canadian entities both inside and outside of Canada, and directly threaten national security” (Page 5). The report is divided into sections that explain why Canada and Canadians are targets of foreign interference, the goals foreign states pursue, who might be targeted, the techniques that might be used to carry out foreign interference, and how to detect and avoid such interference. It concludes by discussing some of the election-specific mechanisms that the Government of Canada has adopted to mitigate the effects and effectiveness of foreign interference operations.
On the whole this is a pretty good overview document. It makes a good academic teaching resource, insofar as it provides a high-level overview of what foreign interference can entail, and it would probably serve as a nice kick-off for discussing the topic of foreign interference more broadly.2
On Monday, the Canadian government imposed mandatory national security risk assessments on scholarly research. The new rules apply to projects that receive funding from the Natural Sciences and Engineering Research Council (NSERC) and involve foreign researchers or private-sector organizations. The stated intent of the assessments is to prevent intellectual property from being stolen and to ensure that Canadian researchers do not share industrial, military or intelligence secrets with foreign governments or organizations to the detriment of Canadian interests. But they will chill research and scholarly training and accentuate anti-immigrant biases, and they may amplify national security problems.
In brief, these assessments add an analysis of national security issues into the process of funding partnerships by compelling researchers to evaluate whether their work is “sensitive.” Cutting-edge topics that are considered sensitive include artificial intelligence, biotechnology, medical technology, quantum science, robotics, autonomous systems and space technology. Amongst other criteria, researchers must also assess risks posed by partners, including whether they might disclose information to other groups that could negatively affect Canada’s national security, whether they could be subject to influence from foreign governments or militaries, or whether they lack clear explanations for how or why they can supplement funding from NSERC.
If a researcher or their team cannot state there are no risks, they must itemize prospective risks, even in cases where they must speculate. Mitigation processes must explain what security protocols will be established, how information might be restricted on a need-to-know basis, or how collaborators will be vetted. Government documents specifically warn researchers to take care when working with members of the university research community, such as contractors, employees or students.
Whenever research is assessed as raising national security concerns, it may be reviewed by NSERC and Canada’s national security agencies, and research programs may need to be modified or partners abandoned before funding will be released.
These assessments will chill Canadian research. Consider Canadian university professors who are working on artificial intelligence research, but who hold Chinese citizenship and thus could potentially be subject to compulsion under China’s national security legislation. Under the assessment criteria, it would seem that such researchers are now to be regarded as inherently riskier than colleagues who pursue similar topics, but who hold Canadian, American or European citizenship. The assessments will almost certainly reify biases against some Canadian researchers on the basis of their nationality, something that has become commonplace in the United States as Chinese researchers have increasingly been the focus of U.S. security investigations.
Students who could potentially be directly or indirectly compelled by their national governments may now be deemed a threat to Canada’s national security and interests. Consequently, international students or those who have families outside of Canada might be kept from fully participating on professors’ research projects out of national security concerns and lose out on important training opportunities. This stigma may encourage international students to obtain their education outside of Canada.
These assessments may create more problems than they solve. Some Canadian researchers with foreign citizenships might apply for foreign funding to avoid national security assessments altogether. But they may also be motivated to conceal this fact for fear of the suspicion that might otherwise accompany the funding, especially given how their American counterparts have been targeted in FBI-led investigations. Foreign intelligence services look for individuals who have something to hide, precisely so that they can exploit such vulnerabilities. In effect, these assessments may amplify the prospect that researchers will be targeted for recruitment by foreign spy agencies and exacerbate fears of foreign espionage and illicit acquisition of intellectual property.
What must be done? If the government insists on applying these assessments, then NSERC must commit to publishing annual reports explaining how regularly research is assessed, the nature of the assessed research, rationales for assessments and the outcomes. Canada’s national security review agencies will also have to review NSERC’s assessments to ensure that the results are based in fact, not suspicion or bias. Researchers can and should complain to the review agencies and the news media if they believe that any assessment is inappropriate.
Ultimately, Canadian university leaders must strongly oppose these assessments as they are currently written. The chill of national security threatens to deepen suspicions towards some of our world-leading researchers and exceptional international students, and those running universities must publicly stand up for their communities. Their universities’ status as being open and inclusive – and being independent, world-leading research bodies – depends on their advocacy.
CSE potentially violated the Privacy Act, which governs how federal government institutions handle personal information.
The CSE’s assistance to the Canadian Security Intelligence Service (CSIS) was concealed from the Federal Court. The Court was responsible for authorizing warrants for CSIS operations that the CSE was assisting with.
CSE officials may have misled Parliament in explaining how the assistance element of its mandate was operationalized in the course of debates meant to extend CSE’s capabilities and mandate.
In this post I describe the elements of the review and a few key parts of CSE’s response to it, and conclude with a series of issues that the review and response raise.
Background
Under the National Defence Act, CSE would incidentally collect Canadian identifying information (CII) in the course of conducting foreign signals intelligence, cybersecurity and information assurance, and assistance operations. From all of those operations, it would produce reports that were sent to clients within the Government of Canada. By default, Canadians’ information is expected to be suppressed, but agencies can subsequently request that CSE re-identify suppressed information.
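To make the suppression-and-disclosure regime described above a bit more concrete, below is a rough, purely illustrative sketch. All of the names and data structures are mine; they are not drawn from CSE’s actual systems or policies.

```python
# A hedged illustration of default suppression of Canadian identifying
# information (CII) in reporting, with re-identification only on an approved
# request. All names and structures here are invented for illustration.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Report:
    text: str
    suppressed_cii: Dict[str, str] = field(default_factory=dict)  # token -> identifier


def suppress_cii(raw_text: str, identifiers: List[str]) -> Report:
    """Replace Canadian identifiers with generic tokens before dissemination."""
    suppressed: Dict[str, str] = {}
    text = raw_text
    for i, identifier in enumerate(identifiers, start=1):
        token = f"[Canadian #{i}]"
        suppressed[token] = identifier
        text = text.replace(identifier, token)
    return Report(text=text, suppressed_cii=suppressed)


def disclose_cii(report: Report, token: str, request_approved: bool) -> str:
    """Re-identify a suppressed token only if the requesting agency's request is approved."""
    if not request_approved:
        raise PermissionError("disclosure request was not approved")
    return report.suppressed_cii[token]


# Example: a report naming "Jane Doe" is disseminated with the name suppressed;
# a recipient department must later request and justify re-identification.
report = suppress_cii("Jane Doe contacted the foreign official.", ["Jane Doe"])
print(report.text)                                  # [Canadian #1] contacted the foreign official.
print(disclose_cii(report, "[Canadian #1]", True))  # Jane Doe
```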
NSIRA examined disclosures of CII from CSE to all recipient government departments between July 1, 2015 and July 31, 2019; this meant that all the disclosures took place when CSE was governed by the National Defence Act and the Privacy Act.1 In conducting its review, NSIRA looked at “electronic records, correspondence, intelligence reports, legal opinions, policies, procedures, documents pertaining to judicial proceedings, Ministerial Authorizations, and Ministerial Directives of relevance to CSE’s CII disclosure regime” (p. 2). Over the course of its review, NSIRA engaged a range of government agencies that requested disclosures of CII, such as the Royal Canadian Mounted Police (RCMP) and Innovation, Science and Economic Development Canada (ISED). NSIRA also assessed the disclosures of CII to CSIS and relevant CSIS affidavits to the Federal Court.
On May 12, 2021, President Joseph Biden promulgated an Executive Order (EO) to compel federal agencies to modify and enhance their cybersecurity practices. In this brief post I note a handful of elements of the EO that are noteworthy for the United States and that can also, more broadly, be used to inform, assess, and evaluate non-American cybersecurity practices.
The core takeaway, for me, is that the United States government is drawing from its higher-level strategies to form a clear and distinct set of policies that are linked to measurable goals. The Biden EO is significant in its scope, though it remains unclear whether it will actually lead to government agencies better mitigating the threats facing their computer networks and systems.
NSIRA is responsible for conducting national security reviews of Canadian federal agencies, inclusive of “the Canadian Security Intelligence Service (CSIS) and the Communications Security Establishment (CSE), as well as the national security and intelligence activities of all other federal departments and agencies.” The expanded list of departments and agencies includes the Royal Canadian Mounted Police (RCMP), the Canada Border Services Agency (CBSA), the Department of National Defence (DND), Global Affairs Canada (GAC), and the Department of Justice (DoJ). As a result of this expansive mandate, the Agency has access to broad swathes of information about the activities undertaken by Canada’s national security and intelligence community.
Despite the potential significance of this breach, little has been publicly written about the possible implications of the unauthorized access. This post acts as an early round of analysis of the potential significance of the access by, first, outlining the kinds of information which may have been accessed by the unauthorized party and, then, raising a series of questions that remain unanswered in NSIRA’s statement. The answers to these questions may dictate the actual seriousness and severity of the cyber-incident.
What is Protected Information?
NSIRA’s unclassified information includes Protected information. Information is classified as Protected when, if compromised, it “could reasonably be expected to cause injury to a non-national interest—that is, an individual interest such as a person or an organization.” There are three classes of protected information, applied based on the sensitivity of the information: compromising Protected A information could “cause injury to an individual, organization or government,” compromising Protected B information could “cause serious injury,” and compromising Protected C information could “cause extremely grave injury.” Protected C information is safeguarded in the same manner as Confidential or Secret material, the compromise of which could, respectively, cause injury or serious injury to “the national interest, defence and maintenance of the social, political, and economic wellbeing of Canada.”
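As a quick reference for the tiers just described, here is a small, purely illustrative mapping; the structure and shorthand phrasing are mine, not the Government of Canada’s official policy text.

```python
# Shorthand summary of the classification tiers discussed above; the phrasing
# below is my paraphrase for illustration, not official policy text.

INFORMATION_CLASSIFICATIONS = {
    # Protected levels: injury to non-national (individual or organizational) interests.
    "Protected A": "compromise could cause injury",
    "Protected B": "compromise could cause serious injury",
    "Protected C": "compromise could cause extremely grave injury",
    # Classified levels: injury to the national interest.
    "Confidential": "compromise could cause injury to the national interest",
    "Secret": "compromise could cause serious injury to the national interest",
}

for level, consequence in INFORMATION_CLASSIFICATIONS.items():
    print(f"{level}: {consequence}")
```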
Intrusion into protected networks brings with it potentially significant concerns based on the information which may be obtained. Per Veterans Affairs, employee information associated with Protected A information could include ‘tombstone’ information such as name, home address, telephone numbers or date of birth, personal record identifiers, language test results, or views which, if made public, would cause embarrassment to the individual or organization. Protected B could include medical records (e.g., physical, psychiatric, or psychological descriptions), performance reviews, tax returns, an individual’s financial information, character assessments, or other files or information that are composed of a significant amount of personal information.
More broadly, Protected A information can include third-party business information that has been provided in confidence, contracts, or tenders. Beyond staff information, Protected B information might include that which, if disclosed, could cause a loss of competitive advantage to a Canadian company or could impede the development of government policies, such as by revealing Treasury Board submissions.
In short, information classified as Protected could be manipulated for a number of ends, depending on the specifics of what information is in a computer network. Theoretically, and assuming that an expansive amount of protected information were present, the information might be used by third parties to attempt to recruit or target government staff, or it could give insights into activities that NSIRA was interested in reviewing or is actively reviewing. Further, were NSIRA either reviewing non-classified government policies or preparing such policies for the Treasury Board, the revelation of such information might advantage unauthorized parties by enabling them to predict or respond to those policies before they are put in place.