Published: 2022-06-03
An Extended Reply Regarding Auditing Anonymity Networks
This is an extended reply to: twitter.com/SarahJamieLewis/status/1532765267675074560 which itself is a reply to Auditing Anonymous Networking Software.
Integration - Mitigating Higher Level Issues
New anonymity networks really need to consider the security of higher level applications using them, in addition to the security of the networks themselves.
See also all of onionscan.org
As an illustrative example: onion services are a cool technology that was (and in many cases still is) plagued by a variety of issues e.g.:
- Localhost Bypasses (see: Onionscan: How Bad are Apache mod_status Leaks Anyway?)
- Hostname Overrides (see: Onionscan: This one weird trick can deanonymize 25% of the “dark web”)
- Correlated Data (see: Onionscan: Revisiting CARONTE)
- Emergent centralization (see: Onionscan: Uptime, Downtime and Freedom Hosting II)
Some of these issues could have been mitigated at a lower level, either within the Tor process itself or in some kind of distributed client software featuring specially designed, and misuse-resistant, APIs.
Some starting questions:
- What applications might use the network?
- How will they integrate? Libraries? API?
- What protections are in place so that a naive user will not compromise their own security through a flawed integration?
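To make that last question concrete, here is a minimal Go sketch of what a misuse-resistant API could look like at the library level: a hypothetical `safeOnionBind` helper that refuses to expose an onion service backend on anything other than a loopback address, heading off the class of localhost bypasses described above. The function name and policy are illustrative assumptions, not any existing library's API.

```go
package main

import (
	"fmt"
	"net"
)

// safeOnionBind is a hypothetical misuse-resistant helper: instead of
// letting an application bind its onion service backend to an arbitrary
// address (risking localhost bypasses when the same machine also serves
// clearnet traffic), it only accepts loopback addresses and rejects
// anything that would expose the backend more widely.
func safeOnionBind(addr string) (net.Listener, error) {
	host, _, err := net.SplitHostPort(addr)
	if err != nil {
		return nil, fmt.Errorf("invalid address %q: %w", addr, err)
	}
	ip := net.ParseIP(host)
	if ip == nil || !ip.IsLoopback() {
		return nil, fmt.Errorf("refusing to bind onion backend to non-loopback address %q", addr)
	}
	return net.Listen("tcp", addr)
}

func main() {
	// A naive integration that tries to listen on all interfaces is
	// rejected outright, rather than silently deanonymizing the service.
	if _, err := safeOnionBind("0.0.0.0:8080"); err != nil {
		fmt.Println("rejected:", err)
	}
	if l, err := safeOnionBind("127.0.0.1:0"); err == nil {
		fmt.Println("bound to", l.Addr())
		l.Close()
	}
}
```

A real implementation would likely go further (preferring unix sockets, stripping identifying headers and server banners), but even this small check removes a common foot-gun from the integration surface.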
Design to Development
How new ideas get evaluated and implemented is a critical part of the security lifecycle. While much of it comes down to people, the processes those people follow must be audited and, where possible, automated.
Code reviews, integration tests, fuzzing, and continuous integration all generate artifacts that can be used to spot issues long before they become vulnerabilities.
Some starting questions:
- Are there formal definitions capturing the security of the system? Can they be checked against the design or implementation efficiently?
- How are new features, or patches to existing features, evaluated for security concerns? How are features checked against pre-existing formal security definitions?
- How are pull requests checked for issues?
- Are there any code quality metrics used? What are their failure cases?
- What parts of the system are easy to test for introduced flaws? What parts of the system are difficult to test for introduced flaws?
- How is code deemed authoritative? Are authoritative branches protected from arbitrary changes?
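As a concrete illustration of a mechanically checkable property, here is the kind of invariant a CI job could verify on every pull request: all wire packets have a single fixed size, so message length leaks nothing to a network observer. The `padMessage`/`unpadMessage` functions and the 1024-byte packet size are assumptions for the sketch, not a description of any particular network's wire format.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const packetSize = 1024 // hypothetical fixed wire-packet size

// padMessage pads a payload to a fixed packet size, prefixing its real
// length, so that every packet on the wire is indistinguishable by size.
func padMessage(payload []byte) ([]byte, error) {
	if len(payload) > packetSize-4 {
		return nil, fmt.Errorf("payload too large: %d bytes", len(payload))
	}
	packet := make([]byte, packetSize)
	binary.BigEndian.PutUint32(packet[:4], uint32(len(payload)))
	copy(packet[4:], payload)
	return packet, nil
}

// unpadMessage recovers the original payload from a fixed-size packet.
func unpadMessage(packet []byte) ([]byte, error) {
	if len(packet) != packetSize {
		return nil, fmt.Errorf("wrong packet size: %d", len(packet))
	}
	n := binary.BigEndian.Uint32(packet[:4])
	if n > packetSize-4 {
		return nil, fmt.Errorf("corrupt length field: %d", n)
	}
	return packet[4 : 4+n], nil
}

func main() {
	// Property check over every legal payload length: padding always
	// produces exactly packetSize bytes, and unpadding round-trips.
	for n := 0; n <= packetSize-4; n++ {
		payload := make([]byte, n)
		packet, err := padMessage(payload)
		if err != nil || len(packet) != packetSize {
			panic("size invariant violated")
		}
		got, err := unpadMessage(packet)
		if err != nil || len(got) != n {
			panic("round-trip failed")
		}
	}
	fmt.Println("fixed-size packet invariant holds")
}
```

A check like this is cheap to run on every commit, and its failure output is exactly the kind of artifact that surfaces a flaw before it ships.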
Development to Distribution
Development is hard, distribution is harder. Some starting questions:
- How is the software distributed? Through what platforms and protocols?
- Are builds reproducible? How can users trace source code to binaries?
- How are users notified of security updates? Are updates automatic?
- What happens if the website, seed node or automatic update mechanism is compromised?
See: Github Issue: Wrong hashes (from getmonero.org)
A Final Question: Documenting Risks
How are risks documented, tracked and accepted/mitigated? Some risks are impossible to fully mitigate, but the mitigations that do exist can be written down and referenced when making decisions. Technologies and attackers change and evolve; it is vital that this kind of information be available for review in the future.
As an example, we maintain the Cwtch Security Handbook for this purpose.