Published: 2021-08-10

Obfuscated Apples

Generating noise in a way which is indistinguishable from real signal is a ridiculously hard problem. Obfuscation does not hide signal, it only adds noise.

If you take anything away from this article, please let it be this fact.

NOTE FROM THE FUTURE: Many of the assumptions used in this article were made before Apple put out further information about how they chose certain parameters. In particular, Apple have now stated that they over-estimate the use of iCloud by several orders of magnitude in order to derive the 1 in 1 trillion number.

The below analysis is based on numbers extrapolated from previous technical summaries, and not from more recent claims by Apple. Later articles on this site dive into those specific claims further.

Sadly, most people operate under the assumption that adding noise to a system is all that it takes to make the signal unrecoverable. This logic is very clearly in operation in the technical summary of Apple’s new proposal for on-device scanning, which, among other things, proposes generating synthetic matches to hide the true number of real matches in the system.

I want to take this opportunity to break down how this kind of obfuscation can be defeated even when not considering the fact that it is Apple themselves who are charged with generating and maintaining the safety parameters of the system.

i.e. even if we treat the people who design and build this system as honest adversaries.

Sketching a Basic Scheme

For the sake of clarity I will omit the technical details of the private set intersection protocol, and the threshold scheme, and we will operate under the assumption that both are cryptographically secure. We will also assume that the database of images to compare against is incorruptible (this is clearly not the case).

At the heart of the system is a (mostly) black box containing a perceptual hash function that analyzes an image and spits out a hash. This hash is then compared against a database of known hashes; if a match is found the system reports true, and otherwise it reports false. As we will see later on, perceptual hashes are not cryptographic hashes.

Throughout this article I will use the term match when talking about both true and false positives, though I will mostly assume any matches are false positives.

According to documentation provided by Apple, the server learns whether any matches occurred on the phone during the PSI protocol.

“The output of PSI protocol on the server reveals whether there is a match or not” - Apple Technical Summary

When a certain threshold of matches is reached, the server gains the ability to decrypt all associated data, a human reviews that data, and a determination is made.

As presented, the system above has one major flaw (besides the gross nature of co-opting a personal device as a surveillance system): the server learns how many matches the device has reported prior to being able to decrypt those matches.

This is obviously very important metadata in the context of the system and as such needs to be protected - if it is not, then Apple, or someone who can compel Apple to release the data, can identify potential targets based on this metadata. As we shall soon discuss, targeting people in this way would be highly irrational if your goal was to actually hunt people doing harm, but people are not rational actors.

To protect this data Apple relies on the invocation of so-called “Synthetic Vouchers” with the following property:

“The probability that a device uploads a synthetic voucher instead of a real voucher for an image is calibrated to ensure the total number of synthetics is of the same order of magnitude as the threshold”

Deriving Probabilities

So, that is it right? Problem averted? Let’s not be too hasty…

From what we know so far there are a few interesting parameters in this system that Apple must determine values for.

There is the threshold t of matches necessary to decrypt the data, the probability of a device generating a synthetic match P(synthetic), and the probability of a false positive match P(falsepositive).

We also know that Apple has constructed these parameters such that the probability of an account being flagged for human review (i.e. when the number of matches M > t) is P(flag) = 1e−12, or one in one trillion.

“The threshold is selected to provide an extremely low (1 in 1 trillion) probability of incorrectly flagging a given account.” - Apple Technical Summary

We can actually work backwards from that number to derive P(falsepositive):

$$P(\texttt{flag}) = \sum_{x = t}^{T} {T \choose x} \cdot P(\texttt{falsepositive})^x \cdot (1-P(\texttt{falsepositive}))^{T - x} \approx 1\mathrm{e}^{-12}$$

In order to finalize this we only need to make educated guesses about 2 parameters: the threshold value, t, and the total number of photos checked per year, T. Apple throws out the number t = 10 in their technical summary, which seems like a good place to start.

Assuming an average account generates 3-4 pictures a day to be checked, then T ≈ 1278 over a year. Plugging in those numbers, we get P(falsepositive) ≈ 0.00035 or 1 in 2858.
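For those who want to check the arithmetic, here is a rough numerical sketch of that derivation. The helper functions are mine, t = 10 and T ≈ 1278 are the guesses from above, and the exact answer shifts a little depending on whether the flag condition is read as M > t or M ≥ t:

```python
from math import comb

def p_flag(p_fp, t=10, T=1278):
    """P(at least t false-positive matches among T photos), per the sum above."""
    p_below_threshold = sum(comb(T, x) * p_fp**x * (1 - p_fp)**(T - x)
                            for x in range(t))
    return 1 - p_below_threshold

def solve_p_fp(target=1e-12, t=10, T=1278):
    """Bisect for the per-photo false positive rate that hits the target P(flag)."""
    lo, hi = 1e-9, 1e-2
    for _ in range(200):
        mid = (lo + hi) / 2
        if p_flag(mid, t, T) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(solve_p_fp())  # same order of magnitude as the ~0.00035 estimate above
```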

Does that number have any relation to reality? There is evidence to suggest that the false acceptance rate for common perceptual hashing algorithms is between 0.001-0.01 for a database size of 500K (Adversarial Detection Avoidance Attacks: Evaluating the robustness of perceptual hashing-based client-side scanning. Shubham Jain, Ana-Maria Cretu, Yves-Alexandre de Montjoye).

That makes our guesstimate of 0.00035 an order of magnitude smaller than the most generous empirical estimate. We will be generous and assume Apple broke some new ground with NeuralHash and 0.00035 represents a major improvement in perceptual hashing false acceptance rates.

Given that, we can go back and calculate P(match), the probability of observing a match each day…

$$P(\texttt{match}) = 1 - (( 1 - {0.00035})^{3.5}) \approx {0.001225} \approx \frac{1}{{816}}$$

Or, a match once on average every 816 days for a person that only stores 3-4 photos per day.

Not everybody is the average person, though. If we apply the same P(falsepositive) to a new parent who takes upwards of 50 photos per day, then their P(match) is:

$$P(\texttt{match}) = 1 - (( 1 - {0.00035})^{50}) \approx {0.01735} \approx \frac{1}{{57}}$$

Or, a match on average every 57 days.
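As a quick sanity check, here are the same two numbers in a few lines of Python. The ~0.00035 per-photo rate is the estimate from above, and the function name is mine:

```python
# Daily probability of at least one false-positive match, using the
# ~0.00035 per-photo rate estimated above (a sketch, not Apple's figures).
p_fp = 0.00035

def p_match_per_day(photos_per_day):
    return 1 - (1 - p_fp) ** photos_per_day

print(1 / p_match_per_day(3.5))  # ~816 days between matches for the average account
print(1 / p_match_per_day(50))   # ~57 days for the prolific "parent" account
```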

At this point I feel compelled to point out that these are average match probabilities. For the prolific photo-taking parent who takes 18250 photos a year, the probability that they actually exceed the threshold in false matches is 6%, assuming t is 10.
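That 6% figure is just a binomial tail. A rough sketch of the calculation, with my own helper name and the same guessed parameters:

```python
from math import comb

def p_exceeds_threshold(n_photos, p_fp, t):
    """P(more than t false-positive matches in n_photos) = 1 - P(at most t)."""
    p_at_most_t = sum(comb(n_photos, x) * p_fp**x * (1 - p_fp)**(n_photos - x)
                      for x in range(t + 1))
    return 1 - p_at_most_t

print(p_exceeds_threshold(18250, 0.00035, 10))  # roughly 0.06, i.e. ~6%
```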

It is also worth mentioning that even though we ballparked t and T, there are explicit constraints on what their values can be. If Apple generates a single t for all accounts, then T needs to be an approximation of the average number of photos an account stores per year. If Apple generates a different t value for every account, then it has enough information already to derive P(observation) and break its own obfuscation.


Using what we now know, we can assess the server-side operations and show how an observer can calculate the probability of a real match given the probability of an observation and the probability of a synthetic match.

The Probability of Synthetic Matches

Before we go any further, we should address one particular complication. Apple appears to suggest that images are probabilistically replaced with synthetic matches:

“the device occasionally produces synthetic vouchers for images as opposed to ones corresponding to their image”

Given that, the rates of real matches vs. synthetic matches aren’t independent. Actual matches might be replaced by synthetic matches.

$$P(\texttt{observation}) = (P(\texttt{match}) \cdot (1 - P(\texttt{synthetic}))) + ((1-P(\texttt{match})) \cdot P(\texttt{synthetic})) \vphantom{+ \frac{1}{1}}$$

Or, to put it another way, the probability of a match being reported as a match is dependent on the probability it isn’t reported as a synthetic. Either way, P(observation|match) = 1.

Further, we can actually make a guess at the value of P(synthetic) under the assumption that it is calculated globally. Since Apple have stated that P(synthetic) is dependent on t and is designed such that it generates synthetic matches in the same order of magnitude as t, we can derive P(synthetic) such that each device generates t synthetic matches a year on average.

Using our numbers from earlier we can place P(synthetic) ≈ 0.01, which would mean that over the course of a year, an average account storing 3-4 photos a day would have a ~70% chance of generating 10 or more synthetic vouchers.

The exact value doesn’t really matter for our purposes. Any order of magnitude greater than 0.01 results in too many synthetic matches, and any order of magnitude smaller results in too few.
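To make that calibration concrete, here is the back-of-the-envelope version, assuming (as above) that P(synthetic) is set globally so that an average account produces about t synthetic vouchers per year:

```python
# t = 10 and T ~= 1278 are the same guesses used throughout this article.
t, T = 10, 1278
p_synthetic = t / T
print(p_synthetic)  # ~0.0078, i.e. roughly the 0.01 ballpark used above
```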

Calculating Synthetic Probabilities

Given what we know about the probabilities in this system we can now piece together a server-side attribution attack to break the privacy provided by synthetic matches.

$$P(\texttt{match}| \texttt{observation}) = \frac{P(\texttt{observation}|\texttt{match}) \times P(\texttt{match})}{P(\texttt{observation})}$$

We know that all matches will result in an observation and so…

$$P(\texttt{match}| \texttt{observation}) = \frac{1 \times P(\texttt{match})}{P(\texttt{observation})}$$

Or more simply:

$$P(\texttt{match}| \texttt{observation}) = \frac{P(\texttt{match})}{P(\texttt{observation})}$$

Given that the probability of a synthetic match is defined by Apple, the only unknown in the system is the probability of a match.

We can now plug in our values from earlier. Remember, Apple has stated that the actual probability of observing t false positive matches, P(flag), is 1 in a trillion and as such we have been able to derive approximate probabilities for false positives.

For an “average” account that stores 3-4 photos per day we know that P(match) ≈ 0.001225, allowing Apple, who defines P(synthetic), to calculate:

$$P(\texttt{match}| \texttt{observation}) = \frac{0.001225 \cdot 0.99}{(0.001225 \cdot 0.99) + (0.998775 \cdot 0.01)} \approx 0.11 $$

Given that we are aiming for ~10 synthetics over the course of a year, an 11% probability of any observation being a real match seems about the right level of indistinguishability.

But what about our prolific “parent” account that stores 50 photos per day? We know that P(match) ≈ 0.01735, allowing Apple, who defines P(synthetic), to calculate:

$$P(\texttt{match}| \texttt{observation}) = \frac{0.01735 \cdot 0.99}{(0.01735 \cdot 0.99) + (0.98265 \cdot 0.01)} \approx 0.63 $$

That is a 63% probability that any reported match is a real match and not a synthetic one!
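The whole server-side estimate fits in a few lines. This sketch simply replays the two worked examples above, with P(synthetic) = 0.01 as assumed throughout:

```python
def p_match_given_observation(p_match, p_synthetic=0.01):
    """Bayes' rule over the observation model sketched above."""
    p_obs = p_match * (1 - p_synthetic) + (1 - p_match) * p_synthetic
    return (p_match * (1 - p_synthetic)) / p_obs

print(p_match_given_observation(0.001225))  # ~0.11 for the "average" account
print(p_match_given_observation(0.01735))   # ~0.63 for the prolific "parent" account
```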

If Apple define a global P(synthetic) then different accounts will naturally have different server-side distributions of observations, and these can be used to tighten the estimates of true matches.

And, again, if Apple can define P(synthetic) on a per-account basis then they have more information to use when tightening these estimates.

The secrecy of this metadata is then paradoxically dependent on both Apple never deriving P(observation) for themselves and on Apple generating a distinct P(synthetic) for each account. Or rather, the privacy of one of the most sensitive aspects of this system requires both that Apple collect no information on accounts and that Apple know enough about accounts to derive the parameters necessary to keep the information private.


It is actually much worse than that though.

While priors may start as an unknown (setting aside the fact that Apple already has enough data to derive this themselves), Apple quickly generates a large amount of data relating to when new observations are made. Since people differ in the ways they take and store photos, and live in different parts of the world, the exact probability of triggering a check varies from person to person.

There are also additional discriminating events in the system itself.

Matches over Threshold without Decryption

One of the most obvious sources of discriminating information is built explicitly into the design. The threshold scheme as proposed requires t real matches in order to decrypt the inner envelopes containing the matched images.

As such, every time a new match is observed once the number of observations is over the threshold, the system learns additional information regarding the makeup of previous matches.

To illustrate, if we set t = 10, then after the system observes 10 matches (made up of an unknown mix of real and synthetic matches), every additional match until decryption can be used to derive information about the previous matches. If the next match to arrive does not allow decryption, then the server can derive that there are at least 2 synthetic matches in the bucket. This confidence increases with every observation that does not result in decryption.

Combining that with our Bayesian estimates from earlier, we can see how an adversarial observer could update their estimates of P(synthetic) based on the total number of observations and the fact that no decryption has taken place.
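To sketch what that update looks like: treat each observed voucher as real with the per-observation probability derived above, and condition on the fact that decryption has not yet triggered (i.e. fewer than t real matches so far). The model and helper below are illustrative, not anything taken from Apple's documents:

```python
from math import comb

def posterior_real_counts(n_obs, p_real, t):
    """P(k real matches | n_obs observations, decryption has not triggered)."""
    prior = [comb(n_obs, k) * p_real**k * (1 - p_real)**(n_obs - k)
             for k in range(n_obs + 1)]
    no_decrypt = sum(prior[:t])  # only k < t is consistent with no decryption
    return [p / no_decrypt if k < t else 0.0 for k, p in enumerate(prior)]

# e.g. the prolific account (~63% of observations real), t = 10, 12 observations so far:
posterior = posterior_real_counts(n_obs=12, p_real=0.63, t=10)
print(sum(k * p for k, p in enumerate(posterior)))  # expected number of real matches
```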


What does this all mean?

In this analysis we have deliberately left out other information that Apple, or someone who can compel Apple, may use to tighten these estimates, e.g. information derived from public social media feeds.

In reality, an adversary will have far greater access to auxiliary data about the environment and target sets than simply raw probabilities.

In that kind of environment, no amount of server-defined obfuscation is enough to protect the metadata that the server holds.

In this case that metadata is a rather controversial number, i.e. the number of possible matches to illegal material detected on the device.

That is interesting metadata to countless entities including the law enforcement and intelligence agencies of multiple jurisdictions and states.

Even if we strictly limit the type of material that Apple is searching for, the high likelihood of false positive events combined with the ease at which Apple can likely distinguish true matching events from synthetic events (as worked through above) should concern any potential subject of the system.

Innocence is no defense against judgements made using derived metadata.



