Apple’s Touchy Subject
Last week Apple announced that it would roll out a system called neuralMatch as part of an iOS 15 update later this year. The simplest part of the system would watch for searches relating to child sexual abuse through Siri and the Apple search application and would direct the user toward ways to report such abuse or to get help with such a problem. The second part of the system is a bit more invasive, essentially adding parental controls to the Messages application that would blur sexually explicit images for any user under 18 and would notify parents if a child 12 or younger tries to view or send such pictures. But the third part of the system is where much of the controversy lies, as it scans the images in the iCloud Photos library and, if it finds such material, reports the account to Apple moderators, who have the option to pass it on to the National Center for Missing & Exploited Children.
The messaging portion of the system gives parents the choice of whether to opt in. If they do, an image classifier designed to detect pornography will look for sexually explicit images in the app, obscure the picture, and ask the user whether they really want to view or send the image. In the update, family account settings will also check whether a user who clicks through the warnings is under 13 years old, at which point they will receive a notice that their parents will be notified, although the actual image will never be seen by the user's parents. Parents will be notified only if the user continues through the process, either viewing or sending such content; they will not be notified if the user merely receives the images and goes no further. We note that this system checks images on the user's phone rather than in the cloud, which some privacy groups view as a violation of the very privacy rules that have been cited in court cases.
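Apple has not published the classifier or the exact flow, but a minimal sketch of the decision logic described above might look like the following. The function names, prompts, and stand-in classifier are illustrative assumptions, not Apple's implementation.

```python
# A minimal, hypothetical sketch of the Messages decision flow described
# above. The classifier and all names here are illustrative assumptions,
# not Apple's actual implementation.

from dataclasses import dataclass


@dataclass
class ChildUser:
    age: int
    family_opted_in: bool  # parents must opt the family account in


def classifier_flags_explicit(image_bytes: bytes) -> bool:
    """Stand-in for the on-device image classifier (not public)."""
    return True  # pretend every image is flagged, for demonstration


def confirm(prompt: str) -> bool:
    """Stand-in for the on-screen 'do you really want to?' dialog."""
    return input(prompt + " [y/N] ").strip().lower() == "y"


def handle_incoming_image(user: ChildUser, image_bytes: bytes) -> str:
    # The feature is off unless the family account has opted in.
    if not user.family_opted_in or not classifier_flags_explicit(image_bytes):
        return "shown"

    # Flagged images are blurred and the child is asked before viewing.
    if not confirm("This image may be sensitive. View anyway?"):
        return "declined"

    # Children 12 or younger get an extra warning; continuing notifies the
    # parents that an image was viewed (the image itself is never shared).
    if user.age < 13:
        if not confirm("Your parents will be notified. Continue?"):
            return "declined"
        print("(parents notified)")

    return "shown after warning"


if __name__ == "__main__":
    child = ChildUser(age=11, family_opted_in=True)
    print(handle_incoming_image(child, b"\x89PNG..."))
```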
The Apple system does scan images in iCloud accounts: if you sync to iCloud, all of your images reside on iCloud servers, where they are scanned against a list of 'known CSAM', and if enough matches are found the account is sent to a moderator, who can confirm the matches, close the account, and notify the authorities. Apple is certainly not alone here, as many social media companies use a Microsoft (MSFT) tool to scan their servers for CSAM, but until now Apple has resisted, scanning only iCloud Mail. If the new system is implemented, it will differ in that it will check images directly on the user's iPhone rather than on servers, which brings the privacy question a bit closer to home. Interestingly, the software that does the 'checking' is not really 'looking' at photos the way a facial recognition system would; it reduces each image to a hash, or digital signature (a big list of numbers), that represents the key features of the image. That hash value is compared against a list of millions of image hashes provided by the NCMEC, and if a match is found an alert is placed on the account.
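A rough sketch of that matching step, assuming an ordinary cryptographic hash in place of Apple's perceptual NeuralHash and ignoring the cryptographic blinding Apple layers on top, might look like this; the threshold value is a made-up placeholder.

```python
# Illustrative sketch only: compare image "signatures" against a list of
# known hashes instead of inspecting the photos themselves. Apple's actual
# system uses a perceptual hash (NeuralHash) plus cryptographic matching on
# device; hashlib's SHA-256 stands in here purely to show the shape of the
# comparison. The threshold is a hypothetical placeholder.

import hashlib

MATCH_THRESHOLD = 30  # hypothetical number of matches before human review


def image_signature(image_bytes: bytes) -> str:
    """Reduce an image to a fixed-length signature (here, SHA-256)."""
    return hashlib.sha256(image_bytes).hexdigest()


def count_matches(library: list[bytes], known_csam_hashes: set[str]) -> int:
    """Count how many photos in a library match the known-hash list."""
    return sum(image_signature(img) in known_csam_hashes for img in library)


def account_flagged(library: list[bytes], known_csam_hashes: set[str]) -> bool:
    """Only accounts exceeding the match threshold are sent to a moderator."""
    return count_matches(library, known_csam_hashes) >= MATCH_THRESHOLD
```

The practical difference from this sketch is that a perceptual hash like NeuralHash is designed so that minor edits such as resizing or recompression still produce a matching signature, which an ordinary cryptographic hash would not.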
Apple is quick to point out that it is only looking at photos that are synced with an iCloud account, meaning that turning off a user's iCloud sync will disable the system completely and no image hash generation will occur. But while the system will certainly do more to out those who traffic in CSAM, others say that upping the game by adding surveillance directly to a user's iPhone opens a pathway to breaking end-to-end encryption[1] and undermining the privacy, and the absence of a backdoor, that Apple has worked so hard for in the past. There is some nuance here: end-to-end encryption does exist in Apple's messaging application, which means Apple will have no access to the messages themselves, nor will parents, but that is not the case on iCloud servers (and not only Apple's), which is what makes it possible for companies to run such CSAM searches.
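That distinction comes down to who holds the decryption keys. As a rough illustration (using the third-party Python cryptography package's Fernet recipe as a stand-in, not Apple's actual protocols):

```python
# Sketch of the key-custody difference discussed above, using the
# third-party `cryptography` package (pip install cryptography) as a
# stand-in. Real iMessage/iCloud protocols are far more involved; the
# point is simply who can decrypt what.

from cryptography.fernet import Fernet, InvalidToken

# End-to-end: only the two users hold this key. The server merely
# stores and relays ciphertext and cannot read or scan it.
users_key = Fernet.generate_key()
message = Fernet(users_key).encrypt(b"photo or text only we can read")

# Server-side encryption: the provider generates and keeps the key,
# so it can decrypt stored data and run scans such as CSAM matching.
providers_key = Fernet.generate_key()
stored_photo = Fernet(providers_key).encrypt(b"photo synced to the cloud")
print(Fernet(providers_key).decrypt(stored_photo))  # provider can read it

# The provider holding only ciphertext and the wrong key gets nowhere.
try:
    Fernet(providers_key).decrypt(message)
except InvalidToken:
    print("server cannot decrypt the end-to-end message")
```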
This difference is Apple's defense against criticism of the new system, along with its claim that there is only a one-in-one-trillion chance per year of incorrectly flagging a given account. But there is great fear that other countries will use the Apple precedent as a way to weaken encryption in the name of fighting terrorism or misinformation, and that Apple itself has the power to modify the safeguards it has put into the system. Apple responds as follows:
Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it.
This is a very complicated issue that almost everyone admits is misunderstood by most and has the potential to become highly politicized. It is a very important question that we could not even begin to claim we understand or can answer, but when a change like this is underway, we think it is important for our readers to be aware of it.
For further reading:
https://www.missingkids.org/HOME
https://www.apple.com/privacy/docs/Building_a_Trusted_Ecosystem_for_Millions_of_Apps.pdf
https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf
https://appleprivacyletter.com/
[1] End-to-end encryption is a process in which only the two parties exchanging data hold the keys needed to decrypt it; no one else holds a 'key' to the data being sent between them. Lesser systems encrypt data in much the same way, but the company also holds a key that would allow decryption.