Apple will report images of child sexual abuse detected on iCloud to law enforcement.
Apple has announced that it will scan photos on iPhones for child sexual abuse material (CSAM), a term that refers to sexually explicit content involving a child. Apple will use new technology to detect CSAM images stored in iCloud Photos.
In a statement, Apple said it is introducing new measures to help protect children from predators and prevent the sharing of CSAM images. The CSAM detection tool will work in three areas: Photos, Siri and Search, and Messages.
Apple’s bold steps against child sexual abuse
The new system detects CSAM images using a process called hashing, in which each image is transformed into a unique number that corresponds to that image.
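To make the idea of hashing concrete, here is a minimal Swift sketch that reduces an image file to a fixed-size fingerprint. Apple’s system is reported to use a perceptual hash (NeuralHash) that tolerates resizing and re-encoding; the plain SHA-256 below is only a stand-in to illustrate the concept of comparing fingerprints rather than pixels.

```swift
import Foundation
import CryptoKit

// Minimal illustration of image hashing: the image bytes are reduced to a
// short, fixed-size fingerprint that stands in for the picture itself.
// NOTE: Apple's system reportedly uses a perceptual hash ("NeuralHash");
// the plain SHA-256 here is a stand-in for illustration only.
func fingerprint(ofImageAt url: URL) throws -> String {
    let imageData = try Data(contentsOf: url)
    let digest = SHA256.hash(data: imageData)
    // Hex-encode the 32-byte digest so it can be stored or compared as text.
    return digest.map { String(format: "%02x", $0) }.joined()
}
```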
Apple started testing the system on Thursday, but most U.S. iPhone users won’t be part of it until an iOS 15 update later this year, Apple said.
The move brings Apple in line with other cloud services which already scan user files, often using hashing systems, for content that violates their terms of service, including child exploitation images.
It also represents a test for Apple, which says that its system is more private for users than previous approaches to eliminating illegal images of child sexual abuse. It uses sophisticated cryptography on Apple’s servers and user devices and doesn’t scan actual images, only hashes.
But many privacy-conscious users still recoil from software that notifies governments about the contents of a device or a cloud account, and they may react negatively to this announcement, especially since Apple has vociferously defended device encryption and operates in countries with fewer speech protections than the U.S.
Law enforcement officials worldwide have also pressured Apple to weaken its encryption for iMessage and other software services like iCloud to investigate child exploitation or terrorism. Thursday’s announcement allows Apple to address some of those demands while preserving its engineering principles around user privacy.
Let’s unpack how this technology works
Before an image is stored in Apple’s iCloud, Apple matches the image’s hash against a database of hashes provided by the National Center for Missing and Exploited Children (NCMEC). That database will be distributed in the code of iOS, beginning with an update to iOS 15. The matching process is done on the user’s iPhone, not in the cloud, Apple said.
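As a rough illustration of that on-device lookup, the sketch below checks an image fingerprint against a locally stored set of known hashes. The type and names are hypothetical, and Apple’s described design uses a cryptographic protocol (private set intersection) rather than a plain set lookup, so this is a simplification of the matching step only.

```swift
import Foundation

// Hypothetical sketch of the on-device matching step: an uploaded photo's
// fingerprint is checked against a database of known hashes shipped with iOS.
// In Apple's described design the comparison is done with private set
// intersection, so neither the device nor Apple learns the result directly;
// the plain set lookup below is a simplification for illustration only.
struct KnownHashDatabase {
    private let knownHashes: Set<String>   // hashes distributed inside iOS

    init(knownHashes: Set<String>) {
        self.knownHashes = knownHashes
    }

    func matches(_ imageFingerprint: String) -> Bool {
        knownHashes.contains(imageFingerprint)
    }
}
```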
If Apple detects a certain number of violating files in an iCloud account, the system will upload a file that allows Apple to decrypt and see the images on that account. A person will manually review the images to confirm whether or not there’s a match.
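The threshold step can be pictured as a simple counter that unlocks review only after enough matches accumulate, as in the hedged sketch below. The structure and the threshold value are illustrative; the announcement did not specify the exact number of matches required.

```swift
// Hypothetical sketch of the threshold step: individual matches are only
// counted, and nothing is surfaced for human review until the count crosses
// a threshold. The value used here is purely illustrative.
struct MatchThresholdGate {
    let threshold: Int
    private(set) var matchCount = 0

    init(threshold: Int) {
        self.threshold = threshold
    }

    // Record one matching upload and report whether review should be unlocked.
    mutating func recordMatch() -> Bool {
        matchCount += 1
        return matchCount >= threshold
    }
}

var gate = MatchThresholdGate(threshold: 30)   // illustrative value only
if gate.recordMatch() {
    // Only at this point would the manual review described above begin.
}
```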
Apple will only be able to review images that match content already known and reported to the NCMEC database — it won’t be able to detect parents’ photos of their kids in the bath, for example, since those images won’t be part of the database.
If the person doing the manual review concludes the system did not make an error, then Apple will disable the user’s iCloud account and send a report to NCMEC or notify law enforcement if necessary. An Apple representative said users could file an appeal to Apple if they think their account was flagged by mistake.
The system only works on images uploaded to iCloud, which users can turn off, Apple said. Photos or other images on a device that haven’t been uploaded to Apple servers won’t be part of the system.
Some security researchers have raised concerns that this technology could eventually be used to identify other kinds of images, such as photos of a political protest. However, Apple said its system is built to work only with images cataloged by NCMEC or other child safety organizations, and that the way its cryptography is constructed prevents it from being used for other purposes.
Apple said it cannot add additional hashes to the database. It also said it is presenting its system to cryptography experts to certify that it can detect illegal child exploitation images without compromising user privacy.
Apple unveiled the feature on Thursday along with other features intended to protect children from predators. In a separate feature, Apple will use machine learning on a child’s iPhone with a family account to blur images that may contain nudity, and parents can choose to be alerted when a child under 13 receives sexual content in iMessage. Apple also updated Siri with information about how to report child exploitation.
The Bottom Line
“The reality is that privacy and child protection can co-exist,” John Clark, president and CEO of the National Center for Missing & Exploited Children, said in a statement. “We applaud Apple and look forward to working together to make this world a safer place for children.”
The announcement is part of a greater push around child safety from the company. Apple said Thursday that a new communication tool will also warn users under age 18 when they’re about to send or receive a message with an explicit image.
The tool, which has to be turned on in Family Sharing, uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. Parents with children under 13 can additionally turn on a notification feature if a child is about to send or receive a nude image. Apple said it would not get access to the messages.
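As a purely hypothetical sketch of how such an on-device check could gate an attachment, the Swift snippet below runs an image through a Core ML classifier via the Vision framework and signals that it should be blurred if an “explicit” label scores highly. The model, label name, and confidence threshold are placeholders; Apple has not published the model or API behind this feature.

```swift
import Vision
import CoreML
import CoreGraphics

// Hypothetical sketch of the Messages safety flow: an on-device model scores
// an incoming image attachment, and the UI blurs it if the score crosses a
// threshold. The model, the "explicit" label, and the 0.9 cutoff are
// placeholders, not Apple's actual implementation.
func shouldBlur(_ image: CGImage, using model: VNCoreMLModel) throws -> Bool {
    let request = VNCoreMLRequest(model: model)
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    // Treat the top classification label as the verdict (illustrative only).
    guard let top = (request.results as? [VNClassificationObservation])?.first else {
        return false
    }
    return top.identifier == "explicit" && top.confidence > 0.9
}
```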
That tool will be available as a future software update, according to the company.