The Problems and Complications of Apple Monitoring for Child Sexual Abuse Material in iCloud Photos

Photo by Mateusz Dach on Pexels.com

On August 5, 2021, Apple announced that it would soon begin conducting pervasive surveillance of the devices that it sells, with the stated intent of expanding protections for children. The company announced three new features. The first will monitor for children sending or receiving sexually explicit images using the Messages application. The second will monitor for the presence of Child Sexual Abuse Material (CSAM) in iCloud Photos. The third will monitor for searches pertaining to CSAM. These features are planned to be activated in the United States in the next versions of Apple’s operating systems, which will ship to end-users in the fall of 2021.

In this post I focus exclusively on the surveillance of iCloud Photos for CSAM content. I begin with background on Apple’s efforts to monitor for CSAM content on its services before providing a description of the newly announced CSAM surveillance system. I then outline some problems, complications, and concerns with this new child safety feature. In particular, I discuss the challenges facing Apple in finding reputable child safety organizations with which to partner, the potential ability to region-shift to avoid the surveillance, the prospect of the surveillance system leading to ongoing harms towards CSAM survivors, the likelihood that Apple will expand the content which is subject to the company’s surveillance infrastructure, and the weaponization of the CSAM surveillance infrastructure against journalists, human rights defenders, lawyers, opposition politicians, and political dissidents. I conclude with a broader discussion of the problems associated with Apple’s new CSAM surveillance infrastructure.

A previous post focused on the surveillance of children’s messages to monitor for sexually explicit photos. Future posts will address the third child safety feature that Apple has announced, as well as the broader implications of Apple’s child safety initiatives.

Background to Apple Monitoring for CSAM

Apple has previously worked with law enforcement agencies to combat CSAM, though the full contours of that assistance are largely hidden from the public. In May 2019, Mac Observer noted that the company had modified its privacy policy to read, “[w]e may also use your personal information for account and network security purposes, including in order to protect our services for the benefit of all our users, and pre-screening or scanning uploaded content for potentially illegal content, including child sexual exploitation material” (emphasis not in original). Per Forbes, Apple places email messages under surveillance when they are routed through its systems. Mail is scanned, and if CSAM content is detected, Apple automatically prevents the email from reaching its recipient and assigns an employee to confirm the CSAM content of the message. If the employee confirms the existence of CSAM content, the company subsequently provides subscriber information to the National Center for Missing and Exploited Children (NCMEC) or a relevant government agency.
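To make the mechanics of this kind of screening concrete, here is a minimal Python sketch of hash-based matching against a provider-held list. The hash value and function names are invented placeholders, and the sketch assumes exact cryptographic hashing; real deployments reportedly rely on perceptual hashing (e.g., Microsoft’s PhotoDNA), which matches visually similar images rather than only byte-identical files.

```python
import hashlib

# Illustrative stand-in for a database of digests of known CSAM images,
# such as the hash lists NCMEC distributes to providers. The value below
# is an invented placeholder; real systems use perceptual hashes
# (e.g., PhotoDNA) that also match visually similar images.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def screen_attachment(data: bytes) -> bool:
    """Return True if an attachment's digest matches the known-image list."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

def handle_mail(attachments: list[bytes]) -> str:
    # A match holds the message for human review instead of delivering it;
    # a confirmed match is then reported to NCMEC or a government agency.
    if any(screen_attachment(a) for a in attachments):
        return "hold_for_review"
    return "deliver"
```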

Continue reading

Apple’s Monitoring of Children’s Communications Content Puts Children and Adults at Risk

Photo by Torsten Dettlaff on Pexels.com

On August 5, 2021, Apple announced that it would soon begin conducting pervasive surveillance of the devices that it sells, with the stated intent of expanding protections for children. The company announced three new features. The first will monitor for children sending or receiving sexually explicit images over the Messages application, the second will monitor for the reception or collection of Child Sexual Abuse Material (CSAM), and the third will monitor for searches pertaining to CSAM. These features are planned to be activated in the next versions of Apple’s mobile and desktop operating systems, which will ship to end-users in the fall of 2021.

In this post I focus exclusively on the surveillance of children’s messages to detect whether they are receiving or sending sexually explicit images. I begin with a short discussion of how Apple has described this system and spell out the rationales for it, and then proceed to outline some early concerns with how this feature might negatively affect children and adults alike. Future posts will address the second and third child safety features that Apple has announced, as well as broader problems associated with Apple’s unilateral decision to expand surveillance on its devices.

Sexually Explicit Image Surveillance in Messages

Apple currently lets families share access to Apple services and cloud storage using Family Sharing. The organizer of the Family Sharing plan can utilize a number of parental controls to restrict the activities that children who are included in a Family Sharing plan can perform. Children, for Apple, include individuals who are under 18 years of age.

Once Apple’s forthcoming mobile and desktop operating systems are installed, and if this analysis feature is enabled in Family Sharing, children’s communications over Apple’s Messages application can be analyzed to assess whether they include sexually explicit images. Apple’s analysis of images will occur on-device, and Apple will not be notified of whether an image is sexually explicit. Should an image be detected, it will initially be blurred out, and if a child wants to see the image they must proceed through either one or two prompts, depending on their age and how their parents have configured the parental management settings.
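As a rough illustration of that layered prompt flow, consider the Python sketch below. The age threshold, field names, and prompt labels are my own assumptions for illustration, not Apple’s published implementation.

```python
from dataclasses import dataclass

@dataclass
class ChildAccount:
    age: int                    # children, for Apple, are under 18
    parent_notifications: bool  # set by the Family Sharing organizer

def prompts_before_viewing(child: ChildAccount) -> list[str]:
    """Prompt sequence a child steps through to view a flagged, blurred image."""
    prompts = ["confirm_intent_to_view"]  # first prompt: do you really want to see this?
    if child.age < 13 and child.parent_notifications:
        # Assumption of this sketch: younger children face a second prompt
        # warning that proceeding will notify a parent.
        prompts.append("acknowledge_parent_will_be_notified")
    return prompts

# Example: a 10-year-old with notifications enabled faces two prompts.
print(prompts_before_viewing(ChildAccount(age=10, parent_notifications=True)))
```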

Continue reading

Every Step You Fake: A Comparative Analysis of Fitness Tracker Privacy and Security

Canadians, and many people around the world, are increasingly purchasing and using electronic devices meant to capture and record their relative levels of fitness. Contemporary fitness trackers collect a broad range of data, which can include the number of floors climbed, the duration and depth of sleep, the number of steps taken and distance travelled over a day, heart rates, and more. All of this data is of interest to the wearers of the devices, to companies interested in mining and selling collected fitness data, to insurance companies, to authorities and courts of law, and even potentially to criminals motivated to steal or access data retained by fitness companies.

Given the potential privacy implications associated with fitness trackers, Andrew Hilts (Open Effect/Citizen Lab), Jeffrey Knockel (University of New Mexico/Citizen Lab), and I investigated the kinds of information that are collected by the companies which develop and sell some of the most popular wearable fitness trackers in North America. We were specifically motivated to understand:

  • Whether the data technically collected by the wearable devices were noted in the companies’ privacy policies and terms of service and, if so, what protections or assurances individuals had concerning the privacy or security of those data;
  • Whether fitness and other collected data were classified as ‘personal’ data by the companies in question; and
  • Whether the information received by the individual matched what a company asserted was ‘personally identifiable information’ in its terms of service or privacy policies.

Our analysis depended on a mixed methodology of technical research, policy analysis, and legal/policy testing. Some of our core findings included:

  • All studied fitness trackers except the Apple Watch were vulnerable to Bluetooth MAC address surveillance (see the sketch after this list).
  • Garmin, Withings, and Bellabeat applications failed to use transit-level security for one or more data transmissions, leaving user data exposed.
  • The Jawbone UP application routinely sent out the user’s precise geolocation for reasons not made obvious to the user.
  • Fitness tracking companies gave themselves broad rights to utilize, and in some cases sell, consumers’ fitness data.
  • Data collected by fitness tracking companies did not necessarily match what could be obtained through an access request.
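As referenced in the first finding, the Python sketch below shows how a passive observer might log nearby Bluetooth Low Energy advertisers, using the third-party bleak library. A tracker that always advertises the same fixed MAC address can be re-identified wherever such a scanner runs; per-connection address randomization is the standard countermeasure. This is an illustration of the surveillance technique, not the test harness used in the study.

```python
import asyncio
from bleak import BleakScanner  # third-party: pip install bleak

async def log_advertisers() -> None:
    # Passively discover BLE devices advertising nearby and print their
    # hardware addresses. A fitness tracker that always advertises the
    # same address can be re-identified by any observer running this.
    devices = await BleakScanner.discover(timeout=10.0)
    for device in devices:
        print(device.address, device.name)

asyncio.run(log_advertisers())
```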

This research was funded by the Office of the Privacy Commissioner of Canada’s Contributions Program, with additional contributions from the Citizen Lab at the Munk School of Global Affairs at the University of Toronto. Open Effect has created a webpage dedicated to the report and its impacts.

Download the Report (Alternate Link)

Review of Desk.PM’s Publishing App (v. 1.0)

I downloaded a copy of Desk last week, an OS X application designed for bloggers by bloggers. It costs $30 from the Mac App Store, which is in line with other blogging software for OS X.

To cut to the chase, I like the application but, as it stands right now, version 1.0 feels like it’s just barely out of beta. As a result there’s no way that I could recommend that anyone purchase Desk until a series of important bug fixes are implemented.

What’s to Love

I write in Markdown. At this point it’s so ingrained in how I stylize my writing that even my paper notebooks (yes, I still use those…) prominently feature Markdown so I can understand links, heading levels, levels of emphasis, and so forth. Desk uses Markdown and also offers a GUI where, after highlighting some text, you’re given the option to add boldface or italics, insert a hyperlink, or generally add in some basic HTML. That means that people like me (Markdown users) are happy, as are (presumably) those who prefer working from a graphical user interface. Everyone wins!

In line with other contemporary writing applications (e.g. Byword, Write) the menu options are designed to fade away while you’re writing. This means there are no distractions when you’re involved in the writing itself, and that’s a good thing. You always have the option of calling up the menu items by scrolling somewhere in the main window. So, the menu is there when you want it and absent when you’re actually working. Another win.

Continue reading

How to Dispel the Confusion Around iMessage Security

Image by Graham Brenna

Apple’s hardware and communications products continue to be widely purchased and used by people around the world. comScore reported in March 2013 that Apple enjoyed a 35% market penetration in Canada, and its desktop and mobile computing devices remain popular choices for consumers. A messaging service, iMessage, spans the entire Apple product line. The company has stated that it “cannot decrypt that data.”

Apple’s statements concerning iMessage’s security are highly suspect. In what follows I summarize some of the serious questions about Apple’s encryption schemas. I then discuss why it’s important for consumers to know whether iMessages are secure from third-party interception. I conclude by outlining how Canadians who use the iMessage application can use Canadian privacy law to ascertain the validity of Apple’s claims against those of the company’s critics.

Continue reading

iPhone Promiscuity

Photo credit: Steve Keys

I’ve written a fair bit about mobile phones; they’re considerable conveniences that are accompanied by serious security, privacy, and technical deficiencies. Perhaps unsurprisingly, Apple’s iPhone has received a considerable amount of criticism in the press and by industry because of the Apple aura of producing ‘excellent’ products combined with the general popularity of their mobile device lines.

In this short post I want to revisit two issues I’ve previously written about: the volume of information that the iPhone emits when attached to WiFi networks, and its contribution to carriers’ wireless network congestion. I revisit the first issue to document here, for my readers and my own projects, just how much information the iPhone makes available to third parties. The second, however, reveals that a technical solution resolves the underlying cause of wireless congestion associated with Apple products. Thus, trapping customers into bucket-based data plans in response to congestion primarily served financial bottom lines instead of customers’ interests. This instance of leveraging an inefficient (economic) solution to a technical problem might, then, function as a good example of the difference between ‘reasonable technical management’ that blends technical and business goals and the management of just the network infrastructure itself.
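For readers who want to observe the first issue themselves, below is a minimal sketch using the scapy packet library that prints the source MAC address and any requested network name from 802.11 probe requests, one of the chattier things a phone emits while searching for known WiFi networks. The interface name is an assumption, monitor mode and root privileges are required, and note that later iOS releases randomized the MAC addresses used in such probes.

```python
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Elt, Dot11ProbeReq

def show_probe(pkt) -> None:
    # Probe requests broadcast the sender's MAC address and, often, the
    # SSID of a remembered network the device is searching for.
    if pkt.haslayer(Dot11ProbeReq):
        ssid = pkt[Dot11Elt].info.decode(errors="replace") or "<broadcast>"
        print(pkt[Dot11].addr2, ssid)

# Assumes a wireless card in monitor mode named "wlan0mon" (an assumption)
# and root privileges.
sniff(iface="wlan0mon", prn=show_probe, store=False)
```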

Continue reading