How can you know if a security tool works?

Unfalsifiability of security claims

Declaring anything to be ‘secure’ is a risky proposition. Something can be shown to be insecure, but not the opposite. Hence, claims that a measure is necessary for security are impossible to prove wrong empirically. This results in a situation where nothing is secure, no countermeasure is unnecessary, and we are left to unsystematically accumulate defences in an impossible struggle. It’s unsurprising that some simply give up on security.

As Herley explains, it is impossible to prove that a defensive measure guarantees security, for a number of reasons. The future is not certain, not all attacks have been attempted against all systems, and not all attacks that will ever exist have been invented. Consequently, no amount of use without something bad happening rules out the possibility that a bad outcome simply has not happened yet. Because of this, we cannot test any security measure: we cannot observe that it will be effective under all possible circumstances. Any condition claimed to be necessary for security is therefore untestable, and so impossible to refute.

In order to make assertions in the face of uncertainty, we can make claims based on assumptions. For example, we might say that random passwords of more than 40 characters are secure against guessing. This is not an observation, but a deduction from an assumption about attacker limitations. However, deductive claims are limited to their premises and cannot be generalized. In this case, a 40-character password is only secure against guessing if, and while, our assumption about the attacker’s limitations holds. As we cannot say with certainty whether the assumption is true, we can neither validate nor falsify whether the measure is needed. Further, the claim refers to security only insofar as password guessability relates to security outcomes, and no further.
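The deductive nature of such a claim can be made concrete with a back-of-the-envelope sketch. In the Python snippet below, the alphabet size and the attacker’s guessing rate are invented assumptions, not figures from the text; the point is that the ‘secure against guessing’ conclusion follows entirely from the assumed throughput, and changes if that premise changes.

```python
import math

# Back-of-the-envelope sketch of the deduction in the text.
# The alphabet size and guess rate are assumptions, not source figures.
ALPHABET = 62               # assumed: a-z, A-Z, 0-9
LENGTH = 40                 # password length from the example
GUESSES_PER_SECOND = 1e12   # assumed attacker throughput (the key premise)

keyspace = ALPHABET ** LENGTH
expected_guesses = keyspace / 2            # on average, half the space
years = expected_guesses / GUESSES_PER_SECOND / (3600 * 24 * 365)

print(f"keyspace ~ 2^{math.log2(keyspace):.0f}")
print(f"expected time to guess: {years:.1e} years")
```

Whatever the output says, it is a statement about the premise, not about the world: an attacker with different limitations, or a different attack entirely, is untouched by the calculation.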

We can define security so that certain things are necessary, but this does not allow us to conclude anything about outcomes. Reality may not coincide with our assumptions about what will occur. For example, if we define a password of more than six characters as necessary for security, we are forced to assume that an attacker can and will attempt to guess all shorter passwords. If no such attempt is ever made, a five-character password may be just as secure; however, it is impossible to be sure. This results in conditional security claims. If either the claim or the condition is vague, such as ‘given a sufficiently motivated attacker’, then we can never convincingly refute the claim. The inability to test claims means there is no way to discover whether they are wrong.
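To see how outcomes can diverge from the definition, consider a toy attacker model. In the hypothetical sketch below, the attacker tries only a short dictionary of common passwords rather than exhausting the keyspace; a six-character password on that list falls, while a random five-character one survives. The dictionary and the attacker model are invented for illustration.

```python
import random
import string

# Toy illustration: outcomes depend on what the attacker actually does,
# not on the length rule. Dictionary and attacker model are invented.
COMMON_PASSWORDS = {"123456", "password", "qwerty123", "letmein1"}

def dictionary_attacker(pw: str) -> bool:
    """Model an attacker who tries only a short list of common passwords."""
    return pw in COMMON_PASSWORDS

five_char_random = "".join(random.choices(string.ascii_lowercase, k=5))

print(dictionary_attacker("123456"))          # True: six characters, still falls
print(dictionary_attacker(five_char_random))  # False: shorter, yet not guessed
```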

Speaking of necessary conditions implies a binary view of security: things are either secure or they are not. A necessary condition is a universal generalization about all the things that are secure. Yet there are many cases where the ineffectiveness of a security measure may not affect the actual experience of security. The ineffectiveness of any one layer of a defence-in-depth arrangement is irrelevant unless the main defence fails. A vulnerability might never be exploited if it remains undiscovered, or if it is relatively expensive to exploit, since attackers can adapt and choose cheaper paths. And if the rate of occurrence of an attack is sufficiently low, the effect of not defending against it may be difficult to observe at all.
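The observability problem can be illustrated with a small simulation. In the sketch below, the attack rate, population size, and time horizon are all invented numbers; under them, the undefended population suffers so few bad outcomes that it is practically indistinguishable from the defended one.

```python
import random

# Toy simulation of the observability problem. The attack rate,
# population size and time horizon are invented for illustration.
ATTACK_RATE = 0.001   # assumed probability of the attack, per system per year
SYSTEMS = 100
YEARS = 10

def bad_outcomes(defended: bool) -> int:
    """Count bad outcomes across the population over the horizon."""
    hits = 0
    for _ in range(SYSTEMS * YEARS):
        if random.random() < ATTACK_RATE and not defended:
            hits += 1
    return hits

random.seed(0)
print("without the defence:", bad_outcomes(False))  # typically 0-3 events
print("with the defence:   ", bad_outcomes(True))   # always 0
```

With an expected count of about one event across the whole population, a decade of observation cannot reliably separate ‘the defence works’ from ‘the attack never came’.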

Despite this, we must take some steps towards being more secure. One approach is to start with a set of security goals that are to be met in order to be sufficiently protected from bad outcomes. The goals might be arrived at from assumed or observed attacker capabilities, or from a threat modelling exercise. The claim that the goals are sufficient to avoid bad outcomes can be falsified: we need only find an outcome that was not considered when the goals were devised. This happens when an attacker ‘steps outside’ the model and uses an attack that hadn’t been considered, or wasn’t previously known. Thus, in this approach, the claim that the goals are sufficient can be falsified, but the claim that they are necessary cannot. The general response to this problem is an ever-expanding set of goals and an unending search for attack opportunities.
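The asymmetry between sufficiency and necessity can be sketched in a few lines. In this toy model (the attack classes are invented), a single observation outside the set of goals falsifies the sufficiency claim, while no observation can falsify the necessity of any individual goal.

```python
# Toy contrast between the two claims. The attack classes are invented.
goals = {"password guessing", "phishing", "SQL injection"}  # assumed threat model

def sufficiency_falsified(observed_attack: str) -> bool:
    """Sufficiency fails the moment an attack outside the model succeeds."""
    return observed_attack not in goals

print(sufficiency_falsified("phishing"))             # False: model survives
print(sufficiency_falsified("supply-chain attack"))  # True: attacker stepped outside

# There is no analogous test for necessity: no observation can show that
# one of the goals could safely have been dropped.
```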

Since there is no mechanism for rejecting measures, they accumulate over time and waste becomes inevitable. Accepting every unfalsifiable claim seems unworkable, as it is incompatible with a limited budget for countermeasures. Yet we lack any mechanism for ordering unfalsifiable claims by importance, so implementing anything short of all of them must be done unsystematically. Without testable claims, and consequently nothing to compare, we end up balancing assumptions. While neglecting any defence might be an unacceptable risk for some, most Internet users confronted with impossible lists of security measures appear to simply tune out.

Nothing can be shown to be secure, and there is no way to disprove claims that a given measure is needed for security. To limit waste, we must be careful not to mistake sufficient security measures for necessary ones.