Published: 2021-08-16

Revisiting First Impressions: Apple, Parameters and Fuzzy Threshold PSI

Last week, Apple published additional information regarding the parameterization of their new Fuzzy Threshold PSI system, in the form of a Security Threat Model.

Security Threat Model Review of Apple’s Child Safety Features

Contained in the document are answers to various questions that the privacy community has been asking since the initial announcement. It also contains information that answered several of my own questions, and in turn invalidated a few of the assumptions I made in a previous article.

Obfuscated Apples

In particular, Apple have now stated that the initial match threshold was derived from a worst-case assumption about NeuralHash’s real-world performance, rather than from the false acceptance rate they actually measured.

One might ask: if the false acceptance rate of NeuralHash is so low, why take such precautions when estimating t?

I will give Apple the benefit of the doubt here under the assumption that they really are attempting to only catch prolific offenders.

Even so, I believe the most recent information from Apple leaves several questions unanswered, and raises several more.

On NeuralHash

To put it as straightforwardly as possible: 100.5M photos is not that large a sample to evaluate a perceptual hashing algorithm against, and its performance is directly related to the size of the comparison database (which we don’t know).

Back in 2017, WhatsApp estimated that 4.5 billion photos were being uploaded to the platform every day. While we don’t have figures for iCloud, we can imagine, given Apple’s significant customer base, that uploads are on a similar order of magnitude.

Connecting One Billion Users Every Day - WhatsApp Blog
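
To make that concrete, here is a back-of-the-envelope sketch. The false acceptance rate below is a placeholder of my own choosing: a clean test over 100.5M images can, at best, resolve a rate of roughly one in 100.5M, so I take that as the most optimistic figure consistent with Apple’s published test.

    # Back-of-the-envelope arithmetic (mine, not Apple's methodology):
    # what a tiny per-image false acceptance rate looks like at
    # iCloud-like upload volume.
    DAILY_UPLOADS = 4.5e9          # WhatsApp's 2017 figure, as a proxy for scale
    TEST_SAMPLE = 100.5e6          # size of Apple's published test set
    ASSUMED_FAR = 1 / TEST_SAMPLE  # most optimistic rate a test this size resolves

    print(DAILY_UPLOADS * ASSUMED_FAR)   # ~45 false matches per day

That is roughly 45 false matches per day even under the most optimistic reading of the test, and the true rate presumably also grows with the (unknown) size of the database being compared against.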

The types of photos being compared also matter. We know nothing about the 100.5M photos that Apple tested against, only that a small 500K sample was pornographic in nature. While NeuralHash seems to have been designed as a generic image comparison algorithm, that doesn’t mean it acts on all images uniformly.

On the Thresholds

“Since this initial threshold contains a drastic safety margin reflecting a worst-case assumption about real-world performance, we may change the threshold after continued empirical evaluation of NeuralHash false positive rates – but the match threshold will never be lower than what is required to produce a one-in-one-trillion false positive rate for any given account.” - Security Threat Model Review of Apple’s Child Safety Features

Apple’s initial value of t = 30 was chosen to include a drastic safety margin. The threat model gives them the explicit ability to change it in the future, but they promise the floor will remain one in one trillion for “any given account”.
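
To see what “one in one trillion for any given account” means, here is a rough sketch that treats per-photo false matches as independent coin flips and computes the probability that an account crosses t = 30. This independence model, the library size, and the per-photo rate are all my own assumptions; Apple have not published their analysis.

    import math

    def log_binom_pmf(n: int, k: int, p: float) -> float:
        # log of P(X = k) for X ~ Binomial(n, p), via lgamma to avoid
        # overflow at large n.
        return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                + k * math.log(p) + (n - k) * math.log1p(-p))

    def p_account_flagged(n_photos: int, p_false: float, t: int = 30) -> float:
        # Tail probability P(X >= t); terms beyond t + 200 are negligible
        # whenever t is far above the mean n_photos * p_false.
        hi = min(n_photos, t + 200)
        return sum(math.exp(log_binom_pmf(n_photos, k, p_false))
                   for k in range(t, hi + 1))

    # Hypothetical account: 50,000 photos, assumed worst-case per-photo
    # false match rate of one in a million (both numbers are mine).
    print(p_account_flagged(50_000, 1e-6))   # ~3e-72, far below 1e-12

Even under that pessimistic per-photo rate, the account-level probability lands absurdly far below one in one trillion, which gives a feel for just how drastic the safety margin is; raising p_false in the sketch shows how badly NeuralHash would have to degrade before the 1e-12 floor is threatened.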

We still know very little about how s will be chosen. We can assume it will be of the same order of magnitude as t, and that as such the number of synthetics for each user will be relatively low compared to the total size of their image library.

Also, given that t is fixed across all accounts, we can be relatively sure that s will also be fixed across all accounts, with only the probability of choosing a synthetic match varying according to some unknown function.

Note that if the probability of synthetic matches is too high, then the detection algorithm fails with high probability, requiring more matches and an extended detection procedure.
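
As a rough illustration of that failure mode (my reading of the design, not Apple’s specification), the sketch below counts the synthetic matches a single account would emit for a few hypothetical values of the per-upload synthetic probability q:

    import random

    def synthetic_count(n_photos: int, q: float) -> int:
        # Decoys emitted if each upload independently becomes a synthetic
        # match with probability q (a simplified model of the mechanism).
        return sum(random.random() < q for _ in range(n_photos))

    random.seed(0)
    for q in (1e-4, 1e-3, 1e-2):               # hypothetical values of q
        print(q, synthetic_count(50_000, q))   # expected: ~5, ~50, ~500

With a 50,000-photo library, anywhere past q ≈ 6e-4 the expected synthetics alone exceed t = 30, and separating real matches from decoys requires a larger threshold or an extended procedure.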

As an aside, if you are interested in playing with the Detectable Hash Function yourself, I wrote a toy version of it.

Threat Model Expansions

The new threat model includes jurisdictional protections for the database that were not present in the original description - namely, that the intersection of two ostensibly independent databases, managed by different agencies in different national jurisdictions, will be used instead of a single database (such as the one run by NCMEC).
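
In code terms, the stated protection reduces to a set intersection. The sketch below is my own construction, not Apple’s, but it captures the claimed property: a hash is only matchable if both agencies independently list it.

    def eligible_hashes(db_agency_a: set[bytes], db_agency_b: set[bytes]) -> set[bytes]:
        # A hash present in only one jurisdiction's database is excluded,
        # so no single agency can unilaterally insert a target.
        return db_agency_a & db_agency_b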

Additionally, Apple have now stated that they will publish a “Knowledge Base” containing root hashes of the encrypted database, such that it can be confirmed that every device is comparing images against the same database. It is worth noting that this claim is only as good as security researchers’ access to proprietary Apple code.
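
Apple have not said how that root hash is constructed. A Merkle tree over the database entries is the standard tool for this kind of commitment, so the following sketch should be read as illustrative rather than as Apple’s actual scheme:

    import hashlib

    def merkle_root(leaves: list[bytes]) -> bytes:
        # Hash every entry, then repeatedly pair-and-hash until a single
        # root remains.
        level = [hashlib.sha256(leaf).digest() for leaf in leaves]
        if not level:
            return hashlib.sha256(b"").digest()
        while len(level) > 1:
            if len(level) % 2:              # odd count: duplicate the last node
                level.append(level[-1])
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0]

    # An auditor with access to the on-device database could then check:
    #   assert merkle_root(device_db_entries) == published_root

But, as noted above, actually performing that check requires the kind of access to device internals that only Apple currently controls.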

That such significant changes were made to the threat model a week after the initial publication is perhaps the best testament to the idea, as Matthew Green put it:

“But this illustrates something important: in building this system, the only limiting principle is how much heat Apple can tolerate before it changes its policies.” - Matthew Green

Revisiting First Impressions

I think the most important question I can ask myself right now is this: if Apple had put out all these documents on day one, would they have been enough to quell the voice inside my head?

Assuming that Apple had also verified the false acceptance rate of NeuralHash in a more verifiable way than “we tested it on some images, it’s all good, trust us!”, then I think many of my technical objections to this system would have been answered.

Not all of them, though. I still, for example, think that the obfuscation in this system is fundamentally flawed from a practical perspective. And I still think that the threat model as applied to malicious clients undermines the rest of the system.

See: A Closer Look at Fuzzy Threshold PSI for more details.

It’s About the Principles

And, of course, none of that quells my moral objections to such a system.

You can wrap that surveillance in any number of layers of cryptography to try and make it palatable, but the end result is the same.

Everyone on Apple’s platform is treated as a potential criminal, subject to continual algorithmic surveillance without warrant or cause.

If Apple are successful in introducing this, how long do you think it will be before the same is expected of other providers? Before walled-garden app stores prohibit apps that don’t do it? Before it is enshrined in law?

How long do you think it will be before the database is expanded to include “terrorist” content? “Harmful-but-legal” content? State-specific censorship?

This is not a slippery slope argument. For decades, we have seen governments and corporations push for ever more surveillance. It is obvious how this system will be abused. It is obvious that Apple will not be in control of how it will be abused for very long.

Accepting client-side scanning onto personal devices is a Rubicon moment; it signals a sea change in how corporations relate to their customers. Your personal device is no longer “yours”, in theory or in practice. It can, and will, be used against you.

It is also abundantly clear that this is going to happen. While Apple has come under pressure, it has responded by painting critics as “confused” (which, if there is any truth in that claim, is due to their own lack of technical specifications).

The media have likewise mostly followed Apple’s PR lead. While I am thankful that we have answers to some of the questions that were asked, and that we seem to have caused Apple to “clarify” (or, less subtly, change) their own threat model, we have not seen the outpouring of objection that would have been necessary to shut this down before it spread further.

The future of privacy on consumer devices is now forever changed. The impact might not be felt today or tomorrow, but in the coming months please watch for the politicians (and sadly, the cryptographers) who argue that what can be done for CSAM can be done for the next harm, and the next harm. Watch the EU and the UK, among others, declare such scanning mandatory, and watch as your devices cease to work for you.


