The Apple CSAM Scanning Discussion – Part One

By: Dr Tristan Jenkinson

Introduction

In June this year, Apple CEO Tim Cook stated that “At Apple, we believe that privacy is a fundamental human right”, echoing a statement that he made back in 2015.

This latest interview followed in the footsteps of the “Privacy – that’s iPhone” ad campaign, which focused on App Tracking Transparency – advising Apple customers that they could “Choose who tracks your information… And who doesn’t”.

Apple’s latest move has concerned many privacy advocates and angered a lot of their own customer base. The issue stems from Apple’s announcement that they plan to scan data on users’ devices for indications of Child Sexual Abuse Material (“CSAM”).

The subject is a difficult one that has polarised opinion. To be clear, those complaining are not looking to defend those storing or sharing such material – the issue is that the methodology being proposed is potentially open to abuse.

At a high level, once you create a system to allow scanning for one type of data (such as CSAM), there is very little that needs to be done to search for other types of material. There are plenty of parties who could put Apple under tremendous pressure to do so.

There is therefore a real concern that this is the start of a slippery slope through diminishing levels of privacy.

Apple’s recent privacy history

Many people view privacy as one of Apple’s core strengths. A recent article from CNBC discusses how Apple’s privacy focus has been gaining them a business advantage.

In the main, Apple’s systems are designed so that Apple themselves cannot access users’ content. Those who have followed the long-running debates on data privacy and encryption will likely be familiar with the battle between the FBI and Apple over the unlocking of the iPhone used by one of the shooters in the San Bernardino terrorist attack of December 2015, and with the follow-up case in 2019 in which Apple were again asked to provide assistance in gaining access to data stored on iPhones – this time two iPhones belonging to the shooter at a Florida naval base.

In both these cases, Apple declined to provide backdoors to the data on the systems, arguing that the implementation of backdoors for law enforcement would weaken security for everyone using those devices and could be exploited by criminals.

Around the same period, two incidents feel particularly worthy of mention – an open letter signed by Bill Barr, and statements made by US Senator Lindsey Graham.

In October 2019, then US Attorney General Bill Barr (alongside Priti Patel) signed an open letter addressed to Facebook CEO Mark Zuckerberg, but apparently aimed more widely at Big Tech, which stated (my emphasis added):

“Security enhancements to the virtual world should not make us more vulnerable in the physical world. We must find a way to balance the need to secure data with public safety and the need for law enforcement to access the information they need to safeguard the public, investigate crimes, and prevent future criminal activity. Not doing so hinders our law enforcement agencies’ ability to stop criminals and abusers in their tracks.

Companies should not deliberately design their systems to preclude any form of access to content, even for preventing or investigating the most serious crimes. This puts our citizens and societies at risk by severely eroding a company’s ability to detect and respond to illegal content and activity, such as child sexual exploitation and abuse, terrorism, and foreign adversaries’ attempts to undermine democratic values and institutions, preventing the prosecution of offenders and safeguarding of victims.”

Facebook responded (covered by the New York Times and PC Magazine) explaining:

“The ‘backdoor’ access you are demanding for law enforcement would be a gift to criminals, hackers and repressive regimes, creating a way for them to enter our systems and leaving every person on our platforms more vulnerable to real-life harm… ”

“It is simply impossible to create such a backdoor for one purpose and not expect others to try and open it”

“People’s private messages would be less secure and the real winners would be anyone seeking to take advantage of that weakened security. That is not something we are prepared to do.”

In December 2019, representatives from Apple and Facebook testified to a Senate hearing on encryption. At that hearing, Senator Lindsey Graham stated:

“We’re not going to live in a world where a bunch of child abusers have a safe haven to practice their craft. Period. End of discussion”; and

“You’re going to find a way to do this or we’re going to go do it for you”.

The following month, Reuters reported that Apple had dropped a plan to apply end-to-end encryption to iCloud backups after the FBI complained.

Then late last week (5 August 2021), Apple announced that they would start scanning users’ data for evidence of CSAM.

What have Apple announced?

Apple have actually announced three areas where they are looking to enhance child safety. Details can be found here – https://www.apple.com/child-safety/.

I have seen a number of discussions in which parties were only aware of one of the areas and suggested that others had misunderstood – so it is important to understand all three, and I would highly recommend reading through the detail provided by Apple.

In brief the three areas are:

  • Communication safety in Messages
    • New communications tools which “will enable parents to play a more informed role in helping their children navigate communication online”.
    • Where accounts are set up as families, the Apple Messages app will include the ability to warn children (and their parents) about explicit photos, either sent or received.
    • Received content will be blurred and children warned that they may not want to view it.
    • If the child agrees to see it, then their parents will be notified (depending on settings).
    • Similarly, with outgoing messages, children will be warned and, if the image is sent, parents will be sent a notification message (depending on settings).
    • Potentially explicit items are identified as such using machine learning on the device itself.
  • CSAM Detection
    • Apple intend to use a new image hashing system, referred to as NeuralHash, to hash all images to be uploaded to iCloud Photos.
    • A “database of known CSAM image hashes provided by NCMEC (National Center for Missing & Exploited Children) and other child safety organizations” will be stored on each device.
    • Scanning appears to take place immediately prior to the file being uploaded to iCloud and will be performed on the device itself. NeuralHashes are calculated for images and then compared to the known list.
    • A “safety voucher” is created for the image which encodes whether the file is a match or not, as well as other information about the file (some suggest this is likely to include a thumbnail image). The safety voucher is then uploaded to iCloud with the image.
    • When an undisclosed threshold of matches is reached, the information in the safety vouchers is made available to Apple (a simplified sketch of this matching-and-threshold flow is included after this list).
    • Apple then reviews the information and confirms whether there is a match to CSAM material. If there is, the user’s account will be disabled and a report made to the NCMEC.
  • Expanded information in Siri and Search
    • The least controversial and least discussed of the three areas (by some distance).
    • Additional resources will be made available in Siri and Search to support children and parents.
    • “For example, users who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report”.
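
To make the CSAM detection flow above a little more concrete, below is a heavily simplified Python sketch of the hash-match-and-threshold idea. Every name in it (neural_hash, KNOWN_CSAM_HASHES, MATCH_THRESHOLD, SafetyVoucher) is my own illustrative placeholder rather than Apple’s API, and the real system never compares hashes in the clear on the device – the match result is hidden cryptographically (Apple describe the use of private set intersection and threshold secret sharing in the detail linked above).

```python
# Illustrative sketch only - not Apple's implementation. In the real design the
# device does not learn whether an image matched: the match result is hidden
# inside an encrypted "safety voucher", and Apple can only read voucher
# contents once a threshold number of matches has been reached.

import hashlib
from dataclasses import dataclass

# Hypothetical on-device list of known CSAM perceptual hashes (in practice
# supplied in blinded/encrypted form by NCMEC and other organisations).
KNOWN_CSAM_HASHES = {"a3f9...", "77bc...", "d051..."}

MATCH_THRESHOLD = 30  # Apple have not disclosed the real threshold value


def neural_hash(image_bytes: bytes) -> str:
    """Toy stand-in for a perceptual hash. NeuralHash is designed so that
    visually similar images (resized, re-encoded) hash to the same value;
    the plain SHA-256 used here to keep the sketch runnable does not."""
    return hashlib.sha256(image_bytes).hexdigest()


@dataclass
class SafetyVoucher:
    image_id: str
    is_match: bool   # in the real system this is encrypted, not readable as-is
    payload: bytes   # stand-in for the additional information about the file


def make_voucher(image_id: str, image_bytes: bytes) -> SafetyVoucher:
    """On-device step: hash the image and record whether it matches the list."""
    return SafetyVoucher(image_id,
                         neural_hash(image_bytes) in KNOWN_CSAM_HASHES,
                         image_bytes[:64])


def threshold_reached(vouchers: list[SafetyVoucher]) -> bool:
    """Server-side step: act only once enough matching vouchers accumulate."""
    return sum(v.is_match for v in vouchers) >= MATCH_THRESHOLD
```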

Don’t other cloud services already scan for CSAM?

One point that has been made a number of times in online discussions is that other providers of cloud services have been scanning for similar material for some time. This is true; Google, for example, scans for CSAM content across its hosted content, including YouTube.

However, Apple’s approach, as documented, differs in two key respects:

  • Scanning of images is taking place on users’ devices, rather than on the cloud infrastructure owned by the provider
  • Messages are being scanned and analysed using machine learning – again on device

Both these points seem to be a significant departure from Apple’s previous posture on security. Because such analysis is being performed on the devices, it could be used to effectively undermine the security offered by end-to-end encryption.

Apple have previously indicated that they could not decrypt or access data stored by users on their Apple devices. The current (at the time of writing) privacy statement on https://www.apple.com/uk/privacy/features/ states:

“Our products and features include innova­tive privacy technologies and techniques designed to minimise how much of your data we — or anyone else — can access.”

And specifically in relation to photos (my emphasis added):

“Face recognition and scene and object detection are done completely on your device rather than in the cloud. So Apple doesn’t know what’s in your photos. And apps can access your photos only with your permission.”

Going back to 2015 and the FBI v Apple case relating to the San Bernardino shooter, in response to requests for Apple to assist the FBI in accessing the data, Apple responded:

“Among the security features in iOS 8 is a feature that prevents anyone without the device’s passcode from accessing the device’s encrypted data. This includes Apple”

And in an interview with NPR back in 2015, Tim Cook highlighted his stance on privacy stating:

“…instead of us taking that data into Apple, we’ve kept data on the phone and it’s encrypted by you. You control it”

“I think everybody’s coming around also to recognizing that any back door means a back door for bad guys as well as good guys. And so a back door is a nonstarter. It means we are all not safe. … I don’t support a back door for any government, ever.”

The Main Concern

Apple’s decision was causing concern among those in the cybersecurity and privacy worlds even before it was announced. Matthew Green, a cryptographer, security technologist and Associate Professor of Computer Science at the Johns Hopkins Information Security Institute, tweeted before the announcement:

“I’ve had independent confirmation from multiple people that Apple is releasing a client-side tool for CSAM scanning tomorrow. This is a really bad idea.”

One of the main concerns is that personal photos and messages are being scanned and analysed on people’s personal devices. Apple is currently only looking for material relevant to CSAM, but this is effectively a surveillance system – albeit one that is currently programmed to report on a specific data set. Now that the system is in place, Apple is likely to face pressure to look for additional data. They will no longer be able to state that this is not possible – which has been their previous rebuttal to such requests (such as in the San Bernardino case). This is, effectively, the back door that Tim Cook promised he would never support.

The view that the approach amounts to a back door is shared by many in the privacy and technology sector, with the CDT (Center for Democracy and Technology) perhaps putting it most bluntly, stating:

“The mechanism that will enable Apple to scan images in iMessages is not an alternative to a backdoor — it is a backdoor”

Similarly, the Electronic Frontier Foundation (EFF) state:

“… at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor”

Let us return to the open letter signed by Bill Barr in 2019 (discussed above). There he gave an effective wish list of data that the DoJ wanted to detect and respond to:

“child sexual exploitation and abuse, terrorism, and foreign adversaries’ attempts to undermine democratic values and institutions”

(Let’s skip over the investigation of concerns of domestic attempts to undermine democratic values and institutions – that’s another article.)

Now that the first on the above wish list has been checked off, Apple will likely face governmental pressure to add the others. This is a point that we will return to shortly.

The additional types of data that governments (or other bodies) may pressurise Apple to include need not stop at the above. Once in place, the systems described by Apple could be trained to look for any type of data, including negative comments about a government or leadership, political or religious beliefs, or LGBTQ+ content. The approach could be used to effectively censor any “undesirable” behaviour.

As explained by the EFF:

“That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change.”

Responses

Notable responses include Edward Snowden, who tweeted:

“No matter how well-intentioned, @Apple is rolling out mass surveillance to the entire world with this. Make no mistake: if they can scan for kiddie porn today, they can scan for anything tomorrow. They turned a trillion dollars of devices into iNarcs—*without asking.*”

Famed DFIR guru Brett Shavers tweeted:

“There is LITERALLY no difference in “scanning” your underwear drawer for criminal evidence without a warrant than @apple scanning your photos for criminal evidence.”

Sarah Jamie Lewis (Executive Director of Open Privacy) makes some very good points, including the below:

“Really is disappointing how many high profile cryptographers actually seem to believe that “privacy preserving” surveillance is not only possible (it’s not) – but also somehow “not surveillance” (it is).”

An open letter has been published which has comments from a number of experts and is highly recommended reading – you can see it here: https://appleprivacyletter.com/.

Apple Respond to Criticism

Apple responded to the mounting criticism, releasing an FAQ document which can be found here: https://www.apple.com/child-safety/pdf/Expanded_Protections_for_Children_Frequently_Asked_Questions.pdf.

There are a few points of particular note.

The first is Apple’s response to “Does this break end-to-end encryption in Messages?” Apple explain that (in their view):

“No. This doesn’t change the privacy assurances of Messages, and Apple never gains access to communications as a result of this feature”.

While the above is true – at present, Apple does not gain access to the communications – Apple’s viewpoint does not tally with that of many in the privacy community, who consider that scanning content is not really in the spirit of end-to-end encryption. Coming back to the CDT article discussed earlier, the CDT stated:

“These new practices mean that Apple will no longer be offering fully end-to-end encrypted messaging through iMessage and will be undermining the privacy previously offered for the storage of iPhone users’ photos”

Perhaps the most commented-upon point from Apple’s response is the question (and its answer): “Could governments force Apple to add non-CSAM images to the hash list?” – demonstrating that (as discussed above) this is really one of the biggest concerns about the approach.

I include Apple’s response to the question in full below:

“Apple will refuse any such demands. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system flags photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.”

Is Apple’s Response Enough?

Perhaps the biggest question in response is “Can Apple refuse?”, shortly followed by “Would Apple ALWAYS refuse?”

As discussed above, with this system in place it becomes possible to scan for other content with minimal changes. We have previously seen, in the FBI v Apple case over the San Bernardino iPhone, that the courts can be used to try to force Apple to assist. One of the biggest points in favour of Apple not assisting the FBI was that they could not, because of the security designed into the system.

So how would things play out if there were moves through the courts to legally force Apple to make changes to their approach? Apple can no longer argue, as they have in the past, that it is simply not possible for them to access the data or grant the request.

Based on the reporting from Reuters on the dropped plan to implement end-to-end encryption for iCloud backups, we have seen that Apple does accede to requests from law enforcement bodies. It is also important to note that Apple have to act in accordance with the law in whichever countries they operate – for example, historically not including FaceTime on iPhones sold in the UAE, where it was not a legally approved VoIP application.

History also points to other cases where Apple has apparently reached agreements with governments, including with China – for example when reportedly removing apps from the app store, actions which have led to concerns from US Senators that “Apple may be enabling the Chinese government’s censorship and surveillance of the Internet”.

More recently, in May 2021 the New York Times reported on an Apple data centre being built in Guiyang. According to the NYT report:

  • Encryption technology used elsewhere in the world was not deployed, because China would not allow it
  • Decryption keys are stored within the data centre with the data they protect
  • Chinese state employees manage the servers

In addition, the article states that:

“…in its data centres, Apple’s compromises have made it nearly impossible for the company to stop the Chinese government from gaining access to the emails, photos, documents, contacts and locations of millions of Chinese residents, according to the security experts and Apple engineers”.

The article also discusses information that offers:

“… an extensive inside look – many aspects of which have never been reported before – at how Apple has given in to escalating demands from the Chinese authorities”

Returning to the words of Sarah Jamie Lewis:

“Apple put out an FAQ *over the weekend* that basically says “we pinky swear we won’t abuse the system and we will refuse demands to compromise the system” so I am sure that’s definitely reassuring to people.”

Conclusion

This is a difficult area, and no one is denying that we want to identify those individuals who have involvement with these sorts of images, and ensure that they face justice. However, the methodologies used to do this are important.

There have long been battles between privacy, encryption and the investigation of data. Apple’s new approach may well be well intentioned, and it has introduced many to new ideas, such as Private Set Intersection and Threshold Secret Sharing. The system does, however, seem to have potential flaws – Apple are now asking their customers to take them at their word that they will not scan for additional types of material. Given the history of governments putting pressure on big tech companies and the potential use of legal means to force their compliance, many do not feel that this is enough.
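
For readers meeting the second of those ideas for the first time, below is a minimal Python sketch of threshold secret sharing in its classic form (Shamir’s scheme over a prime field). It is my own illustration of the general technique rather than Apple’s construction: a secret is split into shares such that any `threshold` of them recover it, while fewer reveal nothing – the property relied upon so that the contents of the safety vouchers only become readable once the match threshold is crossed.

```python
# Minimal Shamir threshold secret sharing over a prime field - an illustration
# of the general "threshold" idea, not Apple's actual construction.
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a small integer secret


def split_secret(secret: int, threshold: int, num_shares: int) -> list[tuple[int, int]]:
    """Split `secret` into `num_shares` points on a random polynomial of
    degree threshold-1; any `threshold` points recover the constant term."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, num_shares + 1)]


def recover_secret(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


if __name__ == "__main__":
    shares = split_secret(secret=123456789, threshold=3, num_shares=5)
    assert recover_secret(shares[:3]) == 123456789  # any 3 shares suffice
    # with only 2 shares, interpolation yields an unrelated value
```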

Ultimately, will the concerns about how Apple’s approach is being implemented have an impact? In the long term, it seems unlikely, but it is clear that Apple have worried many privacy and technology experts and frustrated many of their own customers with their move away not just from industry standards and norms, but from the staunchly pro-privacy stance that they have previously demonstrated.

There is more to come on this topic – there are a few additional points that I want to make, but they are probably best saved for another article.
