Pitfalls of mapping multiple security factors onto a single scale: The weakest link in the chain

Assurance Levels ...

  • Simplify the relationship between controls, vulnerabilities and assets;
  • Are a pragmatic method of standardizing baseline security policies on a single security scale (usually from 1 to 4);
  • Are quite a rough approximation of the actual protection yielded by the specified controls;
  • But this is not a problem, because without that simplification risk assessment would be too complex anyway;
  • Should avoid overly granular controls in their specification; the aim should instead be to establish processes that adapt to changing threats, as with capability maturity models in each specific area.

Identity Assurance Levels, as specified in NIST SP 800-63, map multiple security controls onto a single scale. In a top-down approach, controls are grouped into functional components such as identity proofing, token strength and authentication protocol. The overall assurance level is then that of the weakest component: a level 4 token does not yield the strongest overall level if the proofing process is only level 2. This is the obvious pattern of the weakest link in the chain. But how is weakness measured?
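
As a minimal sketch of this weakest-link rule, the overall level is simply a minimum over the functional components. The component names and level values below are hypothetical illustrations, not the actual NIST SP 800-63 breakdown:

    # Weakest-link aggregation of assurance levels (illustrative sketch).
    # Component names and level values are made up, not taken from NIST SP 800-63.
    components = {
        "identity_proofing": 2,   # e.g. remote proofing with scanned documents
        "token_strength": 4,      # e.g. hardware cryptographic token
        "auth_protocol": 3,       # e.g. protocol resistant to eavesdropping
    }

    def overall_assurance_level(components):
        """The overall level is capped by the weakest functional component."""
        return min(components.values())

    print(overall_assurance_level(components))  # -> 2, despite the level 4 token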


Let us stay with the chain metaphor. A plain chain by itself is not a proper analogy, because it consists of links with almost identical properties. Thus we need to extend the scope to include all components needed to protect something. For example, a chain that keeps a bicycle from being carried away actually involves three different components: the chain, a lock and some fixture to latch the bike to. Now we can list the known threats to the protective mechanism, grouped by component:

  • Chain: being torn apart, cut (by mechanical or thermal means) or sawn through
  • Lock: unlocked with a stolen original key, with a lost original key or with a lock pick, or opened by force (same options as with the chain). The key might be physical or a combination.
  • Fixture: similar options as with the chain, plus the option to remove the fixture as a whole, depending on its weight.
  • "Outer protection": the risk to the thief of being caught by watchful citizens or the police, or of being penalized as a consequence of selling the stolen bike.

To get the best protection, it is necessary to balance the components' resistance against all threats and avoid a single weak component. Resistance can differ between threats for a given control: a chain that resists a bolt cutter might not resist a saw very well. On average, humans do not do too badly when they need to protect a bicycle: they fit components of perceived equal strength, assess the places where they leave the bike, and transfer the remaining risk to an insurance policy. However, if we want to assess the relative strength of the protection system on a single scale, we cannot compute it from the individual factors, because multiple dimensions are involved. The threat is at least two-dimensional: one dimension is the effort a thief needs to crack a component, usually measured in time; the other is the determination of the attacker, measured in skill and equipment.
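
A small illustration of this incomparability, with made-up numbers for the two dimensions (time and attacker skill): neither protection setup dominates the other, so any single-scale ranking requires an arbitrary weighting.

    # Each setup is rated on (minutes of effort to defeat, attacker skill/equipment needed).
    # The numbers are invented purely to illustrate the two-dimensional comparison.
    heavy_chain_cheap_lock = (20, 1)   # slow to cut, but pickable by a novice
    light_chain_good_lock = (3, 4)     # cut quickly, but the lock needs an expert

    def dominates(a, b):
        """True if a is at least as resistant as b in every dimension, and better in at least one."""
        return all(x >= y for x, y in zip(a, b)) and a != b

    print(dominates(heavy_chain_cheap_lock, light_chain_good_lock))  # False
    print(dominates(light_chain_good_lock, heavy_chain_cheap_lock))  # False
    # Neither dominates: the setups are incomparable without a subjective weighting.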

Usability, expressed in weight, effort to lock and unlock, and the possibility to recover a lost key, is another factor with some relationship to protection. If users perceive the lock as too cumbersome, they will tend not to use it when the threat is perceived as low or the exposure as short. As a consequence, there is no formal computation that maps all factors onto a single scale; only an intentional oversimplification will do the job. Assessments of this type can be performed either as an educated guess by experts or, better, by crowdsourcing, as argued by James Surowiecki.


With Identity Assurance, threats are difficult to compare. Enrolling with a stolen credential might be easy for one type of attacker, while setting up a fake verifier and a man-in-the-middle attack would be an easy attack for another. Repudiating a registration or authentication in court might be difficult or easy, but is hardly predictable. These attacks are barely comparable in "criminal energy units" (or "stupidity units" for the unintentional offender). The risk-management approach needs to assess, prioritize and mitigate the identified threats and vulnerabilities against a perceived average. Some uncertainty about the uncertainty is left.


LoA on the control level

The same problem as with the functional components mentioned above occurs again at a more detailed level, where more elementary security controls need to be categorized. For example, relative cryptographic strength is expressed in equivalent symmetric key bits regardless of other properties, or authentication tokens are classified by factors and token types. Again we see the problem that the properties of certain controls cannot be compared directly and are multi-dimensional. Ho Suk Chung argues in his thesis that the single scale used by NIST for token types is less expressive in mapping security factors to levels, and introduces the SoS model (Strength of Security), which takes a multi-dimensional approach and differentiates between pairs of controls that have an "is stronger" relationship and those that have an "is incomparable" relationship.
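
The idea can be sketched as a partial order rather than a single scale. The dimensions, token names and ratings below are illustrative assumptions in the spirit of that multi-dimensional approach, not values taken from the thesis or from NIST SP 800-63:

    def relation(a, b):
        """Compare two controls dimension by dimension: stronger, weaker, equal or incomparable."""
        keys = a.keys() & b.keys()
        a_ge = all(a[k] >= b[k] for k in keys)
        b_ge = all(b[k] >= a[k] for k in keys)
        if a_ge and b_ge:
            return "equally strong"
        if a_ge:
            return "a is stronger"
        if b_ge:
            return "b is stronger"
        return "incomparable"

    # Illustrative token ratings on two dimensions.
    memorized_secret = {"guessing_resistance": 1, "phishing_resistance": 1}
    otp_device       = {"guessing_resistance": 3, "phishing_resistance": 1}
    channel_bound_pw = {"guessing_resistance": 1, "phishing_resistance": 3}

    print(relation(otp_device, memorized_secret))  # "a is stronger"
    print(relation(otp_device, channel_bound_pw))  # "incomparable" -- no total order exists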


Even with this kind of analysis, it is not possible to construct a conclusive and manageable risk model that maps onto a specific assurance level.


First, we have only a poor understanding of the assets to be protected, because Levels of Assurance are defined at a general, abstract level, without knowing which resources need protection and how many transactions are involved. The idea of LoA, particularly in federations, disrupts the classical risk management process, in which a single party controls the whole process. So the threat, as a function of vulnerability and asset value, is hardly known. Compare the risk assessment methodology of NIST SP 800-30.
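
For comparison, qualitative risk determination in the style of NIST SP 800-30 combines likelihood and impact roughly as sketched below; the matrix values are illustrative, not copied from the standard. The point is that a federation-wide LoA policy has to fix such values without knowing the relying parties' actual assets:

    # Qualitative risk lookup in the spirit of NIST SP 800-30 (illustrative values).
    RISK_MATRIX = {
        ("low", "low"): "low",          ("low", "moderate"): "low",           ("low", "high"): "moderate",
        ("moderate", "low"): "low",     ("moderate", "moderate"): "moderate", ("moderate", "high"): "high",
        ("high", "low"): "moderate",    ("high", "moderate"): "high",         ("high", "high"): "high",
    }

    def risk(likelihood, impact):
        """Risk grows with both the likelihood of a threat event and its impact."""
        return RISK_MATRIX[(likelihood, impact)]

    # A federation operator may estimate the likelihood of an attack class,
    # but the impact depends on asset values that only the relying party knows.
    print(risk("moderate", "high"))  # "high"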


Second, there is a lack of shared experience that would allow risk managers to assess the viability and probability of attacks. The number of falsified passports from certain countries, exploits of particular loopholes in protocols, the number of systems infected by a given type of malware, etc. can in many cases only be assessed after the fact, usually once appropriate measures already contain the attack. Even an omniscient observer of the market for zero-day exploits would have a problem assessing how well certain controls mitigate these threats, because it is far too complex to map attacks, vulnerabilities and asset values for a large number of systems.


When relying parties connect to networks with these policies in place, they can perform only a very reduced form of risk management, because they have no direct control over the policy. Nonetheless, the objective is to reduce complexity to a set of policies, one policy per LoA. Otherwise, interoperability between different systems would never scale, because expensive mapping and evaluation exercises would have to be performed before applications, organizations or domains could be connected. A policy that defines the controls for a specific Level of Assurance needs to find a balance between conflicting goals:

  • Exercising control and defining minimal security standards
  • Usability and comprehensibility of the policy
  • Feasibility of implementing the controls (selling the cost and effort)
  • Flexibility to adapt to shifting probabilities and new types of threats
  • Interoperability with other risk mitigation systems

Room for improvement

Given the complexity of risk management in federations, the simplification provided by LoA is a pragmatic approach to risk mitigation. However, justifying the controls is hard. This could be improved by using better data on the perceived threats, such as crowd-sourced data or data from CERTs and security vendors.