Published: 2022-03-21

A brief introduction to insecurity buttons

Recently I have had a number of conversations with people regarding new features in Cwtch. One common thread throughout nearly all of them was the need to explain the concept of an “insecurity”-button.

Specifically, why a suggested feature idea would necessarily result in a ‘bad’ “insecurity”-button.

Simply put, an “insecurity”-button, in the context of a user-facing application, is a button that, when pressed, makes the user less secure than they previously were.

Probably the best-known example of an “insecurity”-button is the TLS warning pane present in all modern web browsers. These panes usually feature a button (or series of buttons) that allows a user to override the warning and browse the website despite a critical issue with the TLS connection. The ability of users to navigate these warnings safely is still an area of active research.

Reeder, R.W., Felt, A.P., Consolvo, S., Malkin, N., Thompson, C. and Egelman, S., 2018, April. An experience sampling study of user reactions to browser warnings in the field. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1-13).

Why have “insecurity”-buttons at all?

I want to state up front that not all “insecurity”-buttons are bad. In fact, I would argue the opposite - that they are essential for building secure, consentful software.

Cwtch has several “insecurity”-buttons under the “Experiments” heading in settings. Experiments enable new functionality at the cost of some quantifiable risk.

“insecurity”-buttons in Cwtch

Or to put it another way, these “insecurity”-buttons offer users a choice of what features they want to enable, while explaining the additional risks that such features inherently require.

One example is “Enable Group Messaging”, which grants access to a protocol that exhibits different metadata privacy properties than P2P conversations, because group messages require offline delivery, and offline delivery requires the involvement of an untrusted server component.

Another example is “Clickable Links”, which causes URLs in Cwtch messages to become “clickable” and (after a warning screen) allows opening those links in an external browser - which may reveal untold metadata to the server behind the URL. There are many other examples in between.

As a rough guide, in Cwtch we believe that “insecurity”-buttons should only result in insecurity that:

- is strictly opt-in,
- is limited in scope, with clearly visible effects in the interface,
- is reversible at any time,
- and does not compromise the security of pre-existing functionality.

If an “insecurity”-button would fail any of those points then it is deemed not-suitable for Cwtch. Everything under “Experiments” is strictly opt-in, and limited in scope. Being able to turn on experiments is itself a setting that must be turned on - and doing so is attached to a warning that experiments may have different privacy considerations.

Users can disable any, or all, experiments at any time.
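To make the gating concrete, here is a minimal sketch of how such an opt-in, reversible experiments model could be structured. This is a hypothetical illustration, not Cwtch's actual settings code: the type and method names are invented for this example.

```go
package main

import "fmt"

// ExperimentSettings is a hypothetical sketch (not Cwtch's actual
// settings model) of the gating described above: every experiment
// is opt-in, sits behind a master toggle that is itself opt-in,
// and can be turned off again at any time.
type ExperimentSettings struct {
	ExperimentsEnabled bool            // master toggle
	Experiments        map[string]bool // e.g. "group-messaging", "clickable-links"
}

// IsEnabled reports whether a feature is active. A feature is only
// active while BOTH the master toggle and its own flag are on, so
// disabling the master toggle instantly reverts every experiment.
func (s *ExperimentSettings) IsEnabled(name string) bool {
	return s.ExperimentsEnabled && s.Experiments[name]
}

// SetExperiment flips a single experiment on or off; disabling is
// always possible, keeping each choice reversible.
func (s *ExperimentSettings) SetExperiment(name string, on bool) {
	if s.Experiments == nil {
		s.Experiments = make(map[string]bool)
	}
	s.Experiments[name] = on
}

func main() {
	s := &ExperimentSettings{}
	s.SetExperiment("group-messaging", true)
	fmt.Println(s.IsEnabled("group-messaging")) // false: master toggle still off
	s.ExperimentsEnabled = true
	fmt.Println(s.IsEnabled("group-messaging")) // true: opted in at both levels
	s.ExperimentsEnabled = false
	fmt.Println(s.IsEnabled("group-messaging")) // false again: fully reversible
}
```

The key design choice in this sketch is that a feature never activates from a single flag; the two-level check means no experiment can become active by accident, and a single switch reverts everything.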

What makes a bad “insecurity”-button?

Fundamentally, if an “insecurity”-button would violate any of the tenets above then it would be a bad idea.

To give an example: I was recently asked about a feature that would effectively create multiple classes of Cwtch Groups with very different censorship and security properties.

The big problem with the idea arose as a consequence of a choice the user would have to make at the server level, i.e. an “insecurity”-button - and more specifically, the scope and reversibility of that choice.

Unlike the regular feature Experiments, each of which produces very obvious UI-level changes, this choice would only really impact the protocol itself, and its results would only be visible if they were explicitly made so by the UI.

The button was also irreversible: once it had been clicked for a given server, the user’s security concerning interaction with all groups hosted on that server would be forever altered.

It also had a significant impact on the security of pre-existing flows, such that those flows would have to be altered and amended in order to not compromise their own security.

Ultimately, this button would have been a bad button.

Closing Thoughts

There is much more to write about the concept of “insecurity”-buttons - especially regarding how to build and use them in a way that maximizes the informed consent of users - but I hope that this brief introduction can provide some context in the next conversation I have where I reject an idea because it would result in a bad “insecurity”-button.
