
Security Metrics Evaluation Tool

A particular security metric may seem worthwhile: collecting data for it may be simple and quick, and the metric itself may be easy to explain to senior management. Yet the same metric may have serious flaws. Perhaps the data is scientifically unreliable or easy to manipulate, or the metric has little connection to the organization's strategic mission or risk management concerns, the factors that matter most to senior management. To use a security metric effectively, security professionals need an organized way to examine it against relevant criteria so that its weaknesses can be identified and corrected.

This Security Metrics Evaluation Tool (Security MET) is intended to help security professionals assess the quality of a given metric. It is a framework for discerning the strong and weak points of a security metric, based on criteria that matter to senior management. A metric that scores high on the Security MET would have a high degree of technical value (scientific merit), operational reasonableness (considering cost and time), and strategic relevance (link to organizational risks or goals). It would also be persuasive when presented to senior management.

You will rate your metric on nine criteria, grouped into three categories:

Technical Criteria – Category 1
  1. Reliability
  2. Validity
  3. Generalizability
Operational (Security) Criteria – Category 2
  4. Cost
  5. Timeliness
  6. Manipulation
Strategic (Corporate) Criteria – Category 3
  7. Return on Security Investment
  8. Organizational Relevance
  9. Communication

Each criterion is explicitly defined. For each criterion, you will assign the metric a score of 1 to 5, with defined anchors for scores of 1, 3, and 5. Choose a score of 2 or 4 if the metric falls between two anchors. Examples after each criterion show how scoring might be applied.
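To make the scoring scheme concrete, here is a minimal sketch in Python. The criterion names, the three categories, and the 1-to-5 scale come from the Security MET; the CRITERIA mapping and the validate_score helper are purely illustrative assumptions, not part of the tool.

    # Hypothetical sketch of the nine Security MET criteria and their 1-5 scores.
    CRITERIA = {
        "Technical": ["Reliability", "Validity", "Generalizability"],
        "Operational (Security)": ["Cost", "Timeliness", "Manipulation"],
        "Strategic (Corporate)": ["Return on Security Investment",
                                  "Organizational Relevance",
                                  "Communication"],
    }

    def validate_score(criterion: str, score: int) -> int:
        """Accept only whole-number scores from 1 to 5.

        Anchors are defined for 1, 3, and 5; a rater chooses 2 or 4 when
        the metric falls between two anchors.
        """
        if score not in (1, 2, 3, 4, 5):
            raise ValueError(f"{criterion}: score must be an integer from 1 to 5")
        return score

    validate_score("Reliability", 4)   # accepted; a score of 0 or 6 would raise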

Score Sheet

The score sheet below tabulates the metric's scores across the nine criteria.

Lower scores on particular criteria show where a metric has room for improvement. Total scores may help compare one metric to another when both are rated by the same evaluator within the same organization.

Criterion                                Score
1. Reliability                           _____
2. Validity                              _____
3. Generalizability                      _____
   Technical Total                       _____
4. Cost                                  _____
5. Timeliness                            _____
6. Manipulation                          _____
   Operational (Security) Total          _____
7. Return on Security Investment         _____
8. Organizational Relevance              _____
9. Communication                         _____
   Strategic (Corporate) Total           _____

TOTAL ACROSS ALL NINE CRITERIA
(Technical + Operational + Strategic)    _____


The numbers on this score sheet, taken from the preceding pages, should provide insights into whether the evaluated metric is strong or weak when measured against specific criteria that matter. Low scores point out areas where a metric needs improvement. After making adjustments to the metric, the user might wish to administer the Security MET again and see if the score rises.
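As a further sketch, the following tabulation of a hypothetical completed score sheet sums each category, computes the overall total against the 45-point maximum, and flags low-scoring criteria as candidates for improvement. The example scores are invented; only the criterion names, the categories, and the 45-point maximum come from the Security MET.

    # Example scores are invented for illustration only.
    score_sheet = {
        "Technical": {"Reliability": 4, "Validity": 3, "Generalizability": 2},
        "Operational (Security)": {"Cost": 5, "Timeliness": 4, "Manipulation": 3},
        "Strategic (Corporate)": {"Return on Security Investment": 2,
                                  "Organizational Relevance": 5,
                                  "Communication": 4},
    }

    # Category subtotals and the overall total (highest possible score is 45).
    category_totals = {category: sum(scores.values())
                       for category, scores in score_sheet.items()}
    overall_total = sum(category_totals.values())

    # Criteria scoring 1 or 2 are the areas where the metric most needs work.
    needs_work = [name
                  for scores in score_sheet.values()
                  for name, value in scores.items() if value <= 2]

    print(category_totals)          # {'Technical': 9, 'Operational (Security)': 12, 'Strategic (Corporate)': 11}
    print("Total across all nine criteria:", overall_total, "/ 45")   # 32 / 45
    print("Criteria to strengthen:", needs_work)   # ['Generalizability', 'Return on Security Investment']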

Every criterion is important. For example, a metric could receive high scores on eight of the nine criteria but still be fatally flawed if it scored a 1 on cost.

Scores on the Security MET criteria point out areas where a particular metric may need to be strengthened. The total score may suggest how close the metric is to attaining the highest possible score (45), but it is not likely to be useful for comparing different metrics, as the scoring would be different for users in different organizations.

The Security Metrics Evaluation Tool was developed through research funded by a grant from the ASIS Foundation and performed by Global Skills X-change (GSX) and Ohlhausen Research, Inc., in 2013-2014.
