Apple Announces New Machine Learning-led Child Safety Features

Tech giant Apple has announced a range of new machine learning-led safety measures designed to protect children from sexually explicit content and to curb the spread of child abuse material, including child pornography.

The first of these is a new communication safety feature in Apple’s Messages app: a warning will appear when a child who is part of an iCloud Family receives or attempts to send sexually explicit photos.

Any such images received by a child will be blurred, accompanied by a message stating that the content “may be sensitive.” If the child then taps “view photo,” a further pop-up will explain that if they choose to view the image, their iCloud Family parent will receive a notification “to make sure you’re OK.” The pop-up will also contain a link to additional help. A similar flow applies to sexually explicit photos a child tries to send.

An on-device machine learning system will analyze image attachments to determine whether a photo is sexually explicit. Apple also confirmed that iMessage remains end-to-end encrypted and that the company will not have access to any of the messages.
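To illustrate the kind of on-device gating described above, the Swift sketch below uses a hypothetical `explicitContentScore(for:)` function as a stand-in for Apple’s unpublished classifier; it is not Apple’s implementation, only a minimal sketch of the flow in which an attachment is analyzed locally and blurred before display.

```swift
import Foundation

// Hypothetical on-device classifier: Apple has not published its model or API.
// A real implementation would run a machine learning model locally.
func explicitContentScore(for imageData: Data) -> Double {
    return 0.0  // placeholder score in [0, 1]
}

enum AttachmentPresentation {
    case showNormally
    case blurWithWarning(message: String)
}

// Decide how to present a received attachment. All analysis happens on the
// device; the image is never sent to Apple for classification.
func presentation(for imageData: Data, threshold: Double = 0.9) -> AttachmentPresentation {
    if explicitContentScore(for: imageData) >= threshold {
        return .blurWithWarning(message: "This may be sensitive.")
    }
    return .showNormally
}
```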

The opt-in feature will be rolled out “later this year to accounts set up as families in iCloud for iOS 15, iPadOS 15, and macOS Monterey,” starting in the US.

The next measure enables Apple to detect child sexual abuse material (CSAM) stored in iCloud Photos and report it to the National Center for Missing and Exploited Children (NCMEC). New technology in iOS and iPadOS performs on-device matching against a database of known CSAM image hashes provided by NCMEC. This database is transformed into an unreadable set of hashes that is securely stored on users’ devices.
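Conceptually, the on-device matching step is a membership check of an image’s hash against the stored database. The Swift sketch below illustrates only that idea: SHA-256 stands in here for the perceptual image hash a real system would use (so that visually similar images still match), and Apple’s actual database is stored in a blinded form the device cannot read.

```swift
import CryptoKit
import Foundation

// Illustrative only: SHA-256 is a stand-in for a perceptual image hash, and
// the real on-device database is stored in a blinded, unreadable form rather
// than as a plain set of hashes.
func imageHash(_ imageData: Data) -> Data {
    Data(SHA256.hash(data: imageData))
}

// On-device membership check against the set of known hashes.
func matchesKnownDatabase(_ imageData: Data, knownHashes: Set<Data>) -> Bool {
    knownHashes.contains(imageHash(imageData))
}
```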

Apple explained that the matching process is powered by a cryptographic technology called private set intersection, which determines whether there is a match without revealing the result. A second technology, called threshold secret sharing, safeguards user privacy by ensuring that the contents of the safety vouchers (the encrypted match results uploaded alongside each image) cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content.
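Threshold secret sharing is a standard cryptographic technique, best known in Shamir’s construction, in which a secret is split into shares such that any sufficient number of shares reconstructs it while fewer reveal nothing. The Swift sketch below demonstrates that general idea with toy parameters; it is not Apple’s construction, and the prime, share counts, and values are purely illustrative.

```swift
import Foundation

// Toy field size; a real scheme would use a much larger prime.
let prime = 2_147_483_647  // 2^31 - 1

// Modular exponentiation by repeated squaring.
func modPow(_ base: Int, _ exponent: Int, _ modulus: Int) -> Int {
    var result = 1
    var b = base % modulus
    var e = exponent
    while e > 0 {
        if e & 1 == 1 { result = result * b % modulus }
        b = b * b % modulus
        e >>= 1
    }
    return result
}

// Modular inverse via Fermat's little theorem (modulus is prime).
func modInverse(_ a: Int, _ modulus: Int) -> Int {
    modPow(((a % modulus) + modulus) % modulus, modulus - 2, modulus)
}

// Split `secret` into `count` shares; any `threshold` of them reconstruct it,
// while fewer reveal nothing useful about the secret.
func makeShares(secret: Int, threshold: Int, count: Int) -> [(x: Int, y: Int)] {
    // Random polynomial of degree threshold - 1 with the secret as constant term.
    let coefficients = [secret % prime] + (1..<threshold).map { _ in Int.random(in: 1..<prime) }
    return (1...count).map { x in
        var y = 0
        for (power, coefficient) in coefficients.enumerated() {
            y = (y + coefficient * modPow(x, power, prime)) % prime
        }
        return (x: x, y: y)
    }
}

// Reconstruct the secret (the polynomial evaluated at x = 0) from at least
// `threshold` shares using Lagrange interpolation.
func reconstruct(from shares: [(x: Int, y: Int)]) -> Int {
    var secret = 0
    for (i, share) in shares.enumerated() {
        var numerator = 1
        var denominator = 1
        for (j, other) in shares.enumerated() where j != i {
            numerator = numerator * ((prime - other.x) % prime) % prime
            denominator = denominator * (((share.x - other.x) % prime + prime) % prime) % prime
        }
        let term = share.y * numerator % prime * modInverse(denominator, prime) % prime
        secret = (secret + term) % prime
    }
    return secret
}

// Example: five shares are issued, but any three are enough to recover 42.
let shares = makeShares(secret: 42, threshold: 3, count: 5)
print(reconstruct(from: Array(shares.prefix(3))))  // 42
print(reconstruct(from: Array(shares.prefix(2))))  // too few shares; does not recover the secret
```

In Apple’s analogy, the “shares” are the safety vouchers: only once an account has uploaded enough matching images can the corresponding content be decrypted and reviewed.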

Apple stated: “This innovative new technology allows Apple to provide valuable and actionable information to NCMEC and law enforcement regarding the proliferation of known CSAM.”

The third new feature is the addition of resources in Siri and Search that offer advice to children and parents on staying safe online. Apple will also update Siri and Search to intervene when users search for queries related to CSAM: “These interventions will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.”

This update will be rolled out later this year “in an update to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.”

Privacy campaigners have expressed concerns over the use of machine learning in these new features. Chris Hauk, consumer privacy champion at Pixel Privacy, commented: “While I am all for clamping down on child abuse and child pornography, I do have privacy concerns about the use of the technology. A machine learning system such as this could crank out false positives, leading to unwarranted issues for innocent citizens. Such technology could be abused if placed in government hands, leading to its use to detect images containing other types of content, such as photos taken at demonstrations and other types of gatherings. This could lead to the government clamping down on users’ freedom of expression and used to suppress ‘unapproved’ opinions and activism.”