Apple Walks a Privacy Tightrope to Spot Child Abuse in iCloud

The AI Database →

Application: Content moderation, Safety
Company: Apple
Sector: Public safety
Source Data: Images
Technology: Machine learning, Machine vision

For years, tech companies have been caught between two impulses: the need to encrypt users' data to protect their privacy and the need to detect the worst sorts of abuse on their platforms. Now Apple is debuting a new cryptographic system that seeks to thread that needle, detecting child abuse imagery stored on iCloud without, in theory, introducing new forms of privacy invasion. In doing so, it has also driven a wedge between the privacy and cryptography experts who see its work as an innovative new solution and those who see it as a dangerous capitulation to government surveillance.

Today Apple introduced a new set of technological measures in iMessage, iCloud, Siri, and search, all of which the company says are designed to prevent the abuse of children. A new opt-in setting in family iCloud accounts will use machine learning to detect nudity in images sent in iMessage. The system can also block those images from being sent or received, display warnings, and in some cases alert parents that a child viewed or sent them. Siri and search will now display a warning if they detect that someone is searching for or viewing child sexual abuse materials, also known as CSAM, and will offer options to seek help for that behavior or to report what was found.

But in Apple's most technically innovative—and controversial—new feature, iPhones, iPads, and Macs will now also integrate a new system that checks images uploaded to iCloud in the US for known child sexual abuse images. That feature will use a cryptographic process that takes place partly on the device and partly on Apple's servers to detect those images and report them to the National Center for Missing and Exploited Children, or NCMEC, and ultimately US law enforcement.

Apple argues that none of those new features for dealing with CSAM endanger user privacy—that even the iCloud detection mechanism will use clever cryptography to prevent Apple's scanning mechanism from accessing any visible images that aren't CSAM. The system was designed in collaboration with Stanford University cryptographer Dan Boneh, and Apple's announcement of the feature includes endorsements from several other well-known cryptography experts. 

“I believe that the Apple PSI system provides an excellent balance between privacy and utility, and will be extremely helpful in identifying CSAM content while maintaining a high level of user privacy and keeping false positives to a minimum,” Benny Pinkas, a cryptographer at Israel’s Bar-Ilan University who reviewed Apple’s system, wrote in a statement to WIRED.

Children's safety groups, for their part, also immediately applauded Apple's moves, arguing they strike a necessary balance that "brings us a step closer to justice for survivors whose most traumatic moments are disseminated online," Julie Cordua, CEO of the child safety advocacy group Thorn, wrote in a statement to WIRED.

Other cloud storage providers, from Microsoft to Dropbox, already perform detection on images uploaded to their servers. But by adding any sort of image analysis to user devices, some privacy critics argue, Apple has also taken a step toward a troubling new form of surveillance and weakened its historically strong privacy stance in the face of pressure from law enforcement.

“I’m not defending child abuse. But this whole idea that your personal device is constantly locally scanning and monitoring you based on some criteria for objectionable content and conditionally reporting it to the authorities is a very, very slippery slope,” says Nadim Kobeissi, a cryptographer and founder of the Paris-based cryptography software firm Symbolic Software. “I definitely will be switching to an Android phone if this continues.”

Apple’s new system isn’t a straightforward scan of user images, either on the company’s devices or on its iCloud servers. Instead it’s a clever—and complex—new form of image analysis designed to prevent Apple from ever seeing those photos unless they’re already determined to be part of a collection of multiple CSAM images uploaded by a user. The system takes a "hash" of all images a user sends to iCloud, converting the files into strings of characters that are uniquely derived from those images. Then, like older systems of CSAM detection such as PhotoDNA, it compares them with a vast collection of known CSAM image hashes provided by NCMEC to find any matches.
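To make that matching step concrete, here is a minimal sketch in Python of hash-based lookup against a set of known hashes. It uses an ordinary cryptographic hash (SHA-256) as a stand-in for a perceptual hash like NeuralHash or PhotoDNA, and the "known" hash value shown is invented for illustration; this is not Apple's code.

```python
import hashlib
from pathlib import Path

# Toy stand-in for a perceptual hash. Apple's NeuralHash and Microsoft's
# PhotoDNA are designed to survive cropping or recoloring; a cryptographic
# hash like SHA-256 is not, and is used here only to keep the sketch runnable.
def image_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Hypothetical set of known-CSAM hashes (in reality supplied by NCMEC,
# and never exposed to the device in raw form).
known_hashes = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def matches_known(path: Path) -> bool:
    """Return True if the image's hash appears in the known-hash set."""
    return image_hash(path) in known_hashes
```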

Apple is also using a new form of hashing it calls NeuralHash, which the company says can match images despite alterations like cropping or colorization. Just as crucially, to prevent evasion, the system never actually downloads those NCMEC hashes to a user's device. Instead, it uses some cryptographic tricks to convert them into a so-called blind database that's downloaded to the user's phone or computer, containing seemingly meaningless strings of characters derived from those hashes. That blinding prevents any user from obtaining the hashes and using them to skirt the system's detection.
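Apple's actual blinding uses elliptic-curve cryptography, but the one-way property it relies on can be illustrated with a toy stand-in: if the database entries are keyed transformations of the hashes, a device that holds only the transformed values can neither recover the original NCMEC hashes nor test arbitrary images against them on its own. The sketch below uses HMAC with a hypothetical server-held secret purely to make that point; it is not Apple's protocol.

```python
import hmac
import hashlib

# Hypothetical server-side secret. In Apple's design the blinding is done
# with elliptic-curve cryptography, not HMAC; this only illustrates the
# one-way property of the database the device receives.
SERVER_SECRET = b"server-only-blinding-key"

def blind(csam_hash: bytes) -> bytes:
    """Transform a known-CSAM hash into an opaque value for the on-device database."""
    return hmac.new(SERVER_SECRET, csam_hash, hashlib.sha256).digest()

# The device is shipped only the blinded values. Without SERVER_SECRET it can
# neither invert them to learn the NCMEC hashes nor check images against the
# list offline, which is what prevents the list from being used to evade detection.
blinded_database = {blind(h) for h in [b"\x01" * 32, b"\x02" * 32]}
```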

The system then compares that blind database of hashes with the hashed images on the user's device. The results of those comparisons are uploaded to Apple's server in what the company calls a "safety voucher" that's encrypted in two layers. The first layer of encryption uses a cryptographic technique known as private set intersection, such that it can be decrypted only if the hash comparison produces a match. No information is revealed about hashes that don't match.
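One way to picture "decryptable only on a match" is to derive the voucher's encryption key from the image hash itself: a server that holds the same hash can derive the same key and open the voucher, while a non-matching voucher stays opaque. The following toy sketch (Python standard library only, with invented helper names) illustrates that idea; Apple's real construction is an interactive private-set-intersection protocol, not this simple scheme.

```python
import hashlib
import hmac

def _keystream(key: bytes, n: int) -> bytes:
    # SHAKE-256 as a simple expandable keystream (toy construction, not a real AEAD).
    return hashlib.shake_256(key).digest(n)

def seal_voucher(image_hash: bytes, payload: bytes) -> bytes:
    """Encrypt the payload under a key derived from the image hash itself."""
    key = hashlib.sha256(b"voucher" + image_hash).digest()
    ct = bytes(a ^ b for a, b in zip(payload, _keystream(key, len(payload))))
    tag = hmac.new(key, ct, hashlib.sha256).digest()
    return ct + tag

def open_voucher(candidate_hash: bytes, sealed: bytes):
    """The server can open the voucher only if it holds the same hash."""
    key = hashlib.sha256(b"voucher" + candidate_hash).digest()
    ct, tag = sealed[:-32], sealed[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        return None  # no match: the voucher reveals nothing
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))
```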

The second layer of encryption is designed so that the matches can be decrypted only if there are a certain number of matches. Apple says this is designed to avoid false positives and ensure that it's detecting entire collections of CSAM, not single images. The company declined to name its threshold for the number of CSAM images it's looking for; in fact, it will likely adjust that threshold over time to tune its system and to keep its false positives to fewer than one in a trillion. Those safeguards, Apple argues, will prevent any possible surveillance abuse of its iCloud CSAM detection mechanism, allowing it to identify collections of child exploitation images without ever seeing any other images that users upload to iCloud.
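Apple has described this second layer in terms of threshold secret sharing: each matching voucher contributes one share of an inner decryption key, and the key can be reconstructed only once enough shares accumulate. A minimal Shamir-style sketch of that idea, with hypothetical parameters, looks like this:

```python
import random

# A large prime field for Shamir's secret sharing (toy parameters).
PRIME = 2**127 - 1

def make_shares(secret: int, threshold: int, count: int) -> list[tuple[int, int]]:
    """Split `secret` so any `threshold` shares reconstruct it; fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, count + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Each matching safety voucher could carry one share; only after some threshold
# number of matches (Apple has not published its number) would the server hold
# enough shares to recover the key for the inner layer.
shares = make_shares(secret=123456789, threshold=3, count=10)
assert reconstruct(shares[:3]) == 123456789
```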

That immensely technical process represents a strange series of hoops to jump through, given that Apple doesn't currently end-to-end encrypt iCloud photos and could simply perform its CSAM checks on the images hosted on its servers, as many other cloud storage providers do. Apple has argued that the process it's introducing, which splits the check between the device and the server, is less privacy invasive than a simple mass scan of server-side images.

But critics like Johns Hopkins University cryptographer Matt Green suspect more complex motives in Apple's approach. He points out that the great technical lengths Apple has gone to in order to check images on a user's device, despite that process's privacy protections, only really make sense in cases where the images are encrypted before they leave a user's phone or computer and server-side detection becomes impossible. And he fears that this means Apple will extend the detection system to photos on users' devices that are never uploaded to iCloud—a kind of on-device image scanning that would represent a new form of invasion into users' offline storage.

Or, in a more optimistic scenario for privacy advocates, he speculates Apple may be planning to add end-to-end encryption for iCloud, and has created its new CSAM detection system as a way to appease child safety advocates and law enforcement while encrypting its cloud storage such that it can't otherwise access users' photos. "What Apple is doing here is a technology demonstration," Green says. "It's not something they need to scan unencrypted iCloud photos. It's something you need if the photos you're scanning are going to be encrypted in the future."

Privacy advocates have pushed Apple to end-to-end encrypt its iCloud storage for years. But Apple has reportedly resisted the move due to pressure from law enforcement agencies such as the FBI, which would lose a valuable investigative tool if that encryption were added. Adding its new CSAM detection system as a precursor to finally encrypting iCloud would thus represent a kind of mixed privacy win—but one that Green worries could open the door to governments around the world demanding that Apple alter the system to scan for content other than CSAM, such as political images or other sensitive data.

While the new CSAM detection features are limited to the US for now, Green fears a future where other countries, particularly China, insist on more concessions. After all, Apple has previously acceded to China's demands that it host user data in Chinese data centers. "The pressure is going to come from the UK, from the US, from India, from China. I'm terrified about what that's going to look like," Green adds. “Why would Apple want to tell the world, ‘Hey, we've got this tool’?”

For now, Apple's new system represents a win, at least, for the fight against child abuse online—if one that's potentially fraught with pitfalls. "The reality is that privacy and child protection can coexist," NCMEC's president and CEO John Clark wrote in a statement to WIRED. "Apple’s expanded protection for children is a game-changer."

Just how much it changes the game for its users' privacy—and in what direction—will depend entirely on Apple's next moves.

UPDATE 8/11/21 1:40PM ET: This story has been updated to clarify that while Apple does encrypt iCloud photos, it does not currently use end-to-end encryption for iCloud.
