Apple’s Monitoring of Children’s Communications Content Puts Children and Adults at Risk

Photo by Torsten Dettlaff on Pexels.com

On August 5, 2021, Apple announced that it would soon begin conducting pervasive surveillance of the devices it sells, with the stated intent of expanding protections for children. The company announced three new features: the first will monitor for children sending or receiving sexually explicit images over the Messages application, the second will monitor for the reception or collection of Child Sexual Abuse Material (CSAM), and the third will monitor for searches pertaining to CSAM. These features are planned to be activated in the next versions of Apple’s mobile and desktop operating systems, which will ship to end-users in the fall of 2021.

In this post I focus exclusively on the surveillance of children’s messages to detect whether they are receiving or sending sexually explicit images. I begin with a short discussion of how Apple has described this system and spell out the rationales for it, and then proceed to outline some early concerns with how this feature might negatively affect children and adults alike. Future posts will address the second and third child safety features that Apple has announced, as well as broader problems associated with Apple’s unilateral decision to expand surveillance on its devices.

Sexually Explicit Image Surveillance in Messages

Apple currently lets families share access to Apple services and cloud storage using Family Sharing. The organizer of a Family Sharing plan can use a number of parental controls to restrict the activities of the children included in the plan. For Apple, children are individuals who are under 18 years of age.

Once Apple’s forthcoming mobile and desktop operating systems are installed, and where this analysis feature has been enabled in Family Sharing, children’s communications over Apple’s Messages application can be analyzed to assess whether they include sexually explicit images. Apple’s analysis of images will occur on-device, and Apple will not be notified of whether an image is sexually explicit. Should an image be detected, it will initially be blurred out, and if a child wants to see the image they must proceed through either one or two prompts, depending on their age and how their parents have configured the parental management settings.

The first prompt, which will apply to all individuals who are under 18 years of age and are members of a Family Sharing plan where this feature has been activated, will confirm that the individual wants to either receive or send an explicit sexual image. The second prompt, which can only be set for children who are registered as being 12 years old or younger, will also warn the child that their parents will be notified that they have either received and viewed, or sent, an explicit image, where the parent has activated this feature in the parental management settings.[1] In the latter case, Apple’s current documentation does not indicate that parents will automatically receive a copy of the image.

Message prompts, courtesy of Apple
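To make the flow described above concrete, the following is a minimal sketch of the gating logic, written in Swift. Everything in it (the ChildAccount and ImageHandling types, the handleImage function, and the flaggedAsExplicit parameter) is hypothetical and only illustrates the publicly described behaviour of on-device flagging, blurring, and one or two prompts depending on age and parental settings; it is not Apple’s actual implementation.

    // Hypothetical sketch of the prompt-gating logic as publicly described.
    // None of these names come from Apple's implementation.
    import Foundation

    struct ChildAccount {
        let age: Int
        let parentNotificationEnabled: Bool  // set by the Family Sharing organizer
    }

    enum ImageHandling {
        case showNormally                   // image not flagged as sexually explicit
        case blurWithWarning                // first prompt only (all children under 18)
        case blurWithWarningAndParentAlert  // second prompt: parents will be notified
    }

    /// Returns how Messages might treat a flagged image, assuming
    /// `flaggedAsExplicit` is the output of the on-device classifier.
    func handleImage(flaggedAsExplicit: Bool, for account: ChildAccount) -> ImageHandling {
        guard flaggedAsExplicit else { return .showNormally }

        // Children 12 or younger can additionally trigger a parent notification,
        // but only when the parent has turned that option on.
        if account.age <= 12 && account.parentNotificationEnabled {
            return .blurWithWarningAndParentAlert
        }

        // Everyone under 18 in an enrolled Family Sharing plan sees the first prompt.
        return .blurWithWarning
    }

    // Example: a 10-year-old whose parent enabled notifications receives a flagged image.
    let child = ChildAccount(age: 10, parentNotificationEnabled: true)
    print(handleImage(flaggedAsExplicit: true, for: child))  // blurWithWarningAndParentAlert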

In the former case, Apple is seemingly attempting to add friction to the process of receiving and sending sexually explicit images, be they sexts or pornographic images more generally. In the latter case, with children 12 years old or younger, Apple’s intent is presumably to make parents aware of when children associated with the Family Sharing account either send or receive sexually explicit material. Child grooming is a particularly serious issue, and Alex Stamos has previously discussed how, in his experiences at Yahoo! and Facebook, predators would routinely try to move grooming conversations off those monitored platforms and onto the Messages application to evade the platforms’ detection systems. In theory, alerting parents to the sending or reception of sexually explicit images will mean that they can react to prevent their child from falling prey to someone who might be a child predator, as well as intervene if their child is viewing other pornographic materials.

Notably, Apple has not explicitly disclosed what constitutes a sexually explicit image. In one of the prompts that will be presented to children, sensitive photos are described as those that “show the private parts that you cover with bathing suits.” It’s unclear whether all images showing a child’s body constitute sexually explicit material, whether Apple is including broader sexualized imagery, or something else. While this feature will initially be deployed to monitor photos, Apple’s prompt mentions sensitive photos and videos. It’s unclear if the inclusion of ‘videos’ is part of a general education effort, a signal that the company plans to expand this monitoring to videos sent over the Messages application, an indication that the image concealment feature will blur the first frame of a video where that frame is sexually explicit, or something else entirely.

The page on Apple’s website announcing this feature does not explain how the on-device machine learning will function. And while this monitoring feature will initially only be deployed to US-based Family Sharing plans, it will presumably be expanded to other jurisdictions in subsequent months and years.

Potential Consequences of Monitoring for Sexually Explicit Image Surveillance in Messages

Beyond not disclosing how this surveillance will technically operate, Apple has also not addressed or engaged with the potential consequences of this mode of surveillance.

Outing LGBTQ2+ Children

We can imagine a situation where a parent is alerted that their child is sending or receiving explicit sexual images. The child may recognize that they are LGBTQ2+ but not have revealed this to their parents, and Apple’s surveillance may lead some parents to discover their child’s sexual identity by examining the images being sent or received. Alternately, the child may be exploring their identity and, as part of this exploration, send or receive sexually explicit images. In either case, such a revelation could out the child’s identity, or identity explorations, to their parents against the child’s own wishes, with potentially serious physical or psychological consequences. Apple’s functionality will have the effect of endangering and harming some children.

Reducing Children’s Security

Children of all ages are well known for evading parental surveillance, and we should expect them to do the same when this parental control is deployed. One thing that children might do, as an example, is store images on other cloud providers’ systems and send links to online albums.[2] One of the advantages of Apple’s infrastructure is that it is relatively well secured from third-party intrusion, and the company’s security team is generally well regarded.

Forcing children onto less-secure cloud infrastructures may increase the likelihood that the sexually explicit images they create or share will be exposed in the event of a data breach. In effect, Apple’s attempt to discourage children from sending sexually explicit images may push them toward less-secure methods of sending those images, with potentially serious long-term harms should the images be exposed by the company storing them. These harms include the potential that their own images, or images of other children, are exposed, as well as a heightened likelihood of being harassed over the kinds of pornographic images they have received or shared.

Challenges for Well-Meaning Parents

Parents who are notified that their pre-teen children are receiving or sending sexually explicit images will be faced with challenging questions. The following are only a small subset of them:

  • How do they ‘have a conversation’ with a child who may be engaging in illegal behaviour, should the child be receiving or sending sexually explicit images that are classified as child pornography where they live?
  • What do they do if their child has sent a sexually explicit image of themselves to another person?
  • What are they supposed to do should they discover child grooming? How do they report a person contacting their child?
  • As the ‘owner’ of the online cloud storage that the images might reside within (insofar as the parents are paying the monthly storage costs) what legal jeopardy might the monitoring system place them under?
  • Is there a risk that a parent reporting that their child has sent sexually explicit images of themselves will result in child services intervening, potentially at the risk of their children being taken away by the state?
  • What are the consequences for a parent who does not report that their child has received grooming messages (despite the parent having ended the grooming efforts), should the child later mention the grooming to a teacher or another party who is obligated to report such incidents to law enforcement?

These are all relatively obvious questions, but Apple does not appear to be providing guidance to parents who may soon be confronting them.

Apple-Designed Stalkerware Enhancements

Family management systems are often used by abusive individuals to exert control over their partners. If an abusive partner controls the Family Sharing plan, they might enrol their partner as a pre-teen child. In doing so, they obtain yet another means of exerting control in the relationship while they are in it, and may also extend that control after an abused partner escapes the physical presence of their abuser. The abusive individual might also enrol their partner as a teen under the age of 17 simply to exert control over what their partner can send or receive on the Apple devices they use.

When individuals escape abusive relationships it can be very dangerous to sever the digital surveillance they are subjected to, because doing so can amplify the risk of physical violence. As such, individuals who are part of a Family Sharing plan may be even less able to exert their autonomy under a system such as this, since their abusive former partner will be alerted whenever the controlled person either receives or sends a sexually explicit image. In short, Apple’s efforts to protect children may further endanger adults who are already in precarious or dangerous intimate relationships and further erode their autonomy.

Discussion

Apple has unilaterally made a number of decisions concerning the surveillance that children, including teenagers, may be subjected to when they use the Messages application. These decisions have notable consequences: outing LGBTQ2+ children, potentially pushing children into less-secure spaces to upload or share sexually explicit images, raising serious legal questions for parents, and enhancing the stalkerware capabilities of the Family Sharing service. They also fundamentally normalize persistent surveillance of children in the name of keeping them safe.

Apple has demonstrated that it is willing to undermine the privacy afforded to pre-teens, in particular, in the name of ostensibly looking out for their interests. However, the infringements on privacy do not need to stop with children and monitoring for sexually explicit images: it is easy to imagine local machine learning being used to detect language that is socially regarded as problematic, or to track the links that are shared. Such surveillance need not be limited to material seen as exclusively dangerous to children; it could include extremist messaging or other kinds of information regarded as a threat to national security. And while Apple’s surveillance system will apply to images upon release, there is no reason to expect that it will stop there: it may be expanded, over time, to videos and other media, potentially including written and audio content.

Now that Apple has made this kind of surveillance available in the Messages application, other messaging providers that offer strong end-to-end encryption will certainly experience heightened pressure to adopt systems paralleling Apple’s. And they, like Apple, will be pressured to monitor not just for sexually explicit images but for a far wider and more diverse set of content, often defined by local or regional customs. In countries where it remains illegal to express one’s LGBTQ2+ identity, governments might demand that child monitoring systems flag any and all LGBTQ2+ content, and the same could be true in countries that oppress certain religions. In these situations Apple and other messaging companies might be compelled to notify parents when their children are exposed to, or engaged in, such ‘deviant’ behaviours and communications.

While Apple plans to first deploy this surveillance on accounts registered in the United States, it will presumably be expanded to other jurisdictions in the subsequent months and years. As Apple expands its new Family Sharing and surveillance system into additional jurisdictions, it will face clear and obvious pressure from governments to apply the machine learning-based surveillance to a wider range of content than just sexually explicit images. The same will be true of all messaging companies that want to provide private communications that are not subject to state intervention.

Conclusion

To be clear, issues like sextortion and grooming are very real and can lead to profound harms. The Messages surveillance that Apple plans to deploy may enable parents to be more aware of efforts by adults or other children to extort sexually explicit images from children. Experts have debated the correct approach to assisting children in these situations, but Apple has unilaterally decided to automate Messages surveillance rather than enabling children themselves to self-identify content and alert their parents, or to report it to groups that assist children who are being sextorted, bullied, or harassed. There will also be a range of potentially unintended consequences associated with the monitoring for sexually explicit images, including consequences that will negatively affect children.

In announcing these new child safety features Apple has radically transformed the nature of the debate surrounding messaging privacy, and for the worse. Pandora’s box likely cannot be closed after Apple’s decision to open communications content to surveillance. For a company that prides itself on protecting individuals’ privacy, it is shocking to see it adopt a practice that undermines privacy while, at the same time, further endangering the security of its most vulnerable users.


Footnotes

  1. This is not a declared feature in the public-facing documentation, but has been reported by trusted sources.
  2. Currently, Apple’s surveillance does not seem built to pre-scan albums before an individual accesses them, though this is a feature they might adopt in the future.